2026-02-02 02:12:56.529159 | Job console starting
2026-02-02 02:12:56.549499 | Updating git repos
2026-02-02 02:12:56.619226 | Cloning repos into workspace
2026-02-02 02:12:56.832774 | Restoring repo states
2026-02-02 02:12:56.852138 | Merging changes
2026-02-02 02:12:56.852165 | Checking out repos
2026-02-02 02:12:57.126761 | Preparing playbooks
2026-02-02 02:12:57.768614 | Running Ansible setup
2026-02-02 02:13:02.250228 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-02-02 02:13:03.006547 |
2026-02-02 02:13:03.006743 | PLAY [Base pre]
2026-02-02 02:13:03.024367 |
2026-02-02 02:13:03.024502 | TASK [Setup log path fact]
2026-02-02 02:13:03.055079 | orchestrator | ok
2026-02-02 02:13:03.073275 |
2026-02-02 02:13:03.073407 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-02 02:13:03.117280 | orchestrator | ok
2026-02-02 02:13:03.130562 |
2026-02-02 02:13:03.130675 | TASK [emit-job-header : Print job information]
2026-02-02 02:13:03.170298 | # Job Information
2026-02-02 02:13:03.170466 | Ansible Version: 2.16.14
2026-02-02 02:13:03.170500 | Job: testbed-upgrade-stable-rc-ubuntu-24.04
2026-02-02 02:13:03.170533 | Pipeline: periodic-midnight
2026-02-02 02:13:03.170556 | Executor: 521e9411259a
2026-02-02 02:13:03.170576 | Triggered by: https://github.com/osism/testbed
2026-02-02 02:13:03.170598 | Event ID: c92ce9b72d3f4113a0422af615ebd436
2026-02-02 02:13:03.178209 |
2026-02-02 02:13:03.178329 | LOOP [emit-job-header : Print node information]
2026-02-02 02:13:03.300768 | orchestrator | ok:
2026-02-02 02:13:03.301025 | orchestrator | # Node Information
2026-02-02 02:13:03.301070 | orchestrator | Inventory Hostname: orchestrator
2026-02-02 02:13:03.301104 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-02-02 02:13:03.301134 | orchestrator | Username: zuul-testbed03
2026-02-02 02:13:03.301161 | orchestrator | Distro: Debian 12.13
2026-02-02 02:13:03.301194 | orchestrator | Provider: static-testbed
2026-02-02 02:13:03.301223 | orchestrator | Region:
2026-02-02 02:13:03.301250 | orchestrator | Label: testbed-orchestrator
2026-02-02 02:13:03.301277 | orchestrator | Product Name: OpenStack Nova
2026-02-02 02:13:03.301305 | orchestrator | Interface IP: 81.163.193.140
2026-02-02 02:13:03.336026 |
2026-02-02 02:13:03.336212 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-02-02 02:13:03.781021 | orchestrator -> localhost | changed
2026-02-02 02:13:03.789291 |
2026-02-02 02:13:03.789413 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-02-02 02:13:04.829160 | orchestrator -> localhost | changed
2026-02-02 02:13:04.844123 |
2026-02-02 02:13:04.844257 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-02-02 02:13:05.151939 | orchestrator -> localhost | ok
2026-02-02 02:13:05.159307 |
2026-02-02 02:13:05.159436 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-02-02 02:13:05.188619 | orchestrator | ok
2026-02-02 02:13:05.205116 | orchestrator | included: /var/lib/zuul/builds/13c99b7f5ab0455e81e88aef51d00270/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-02-02 02:13:05.213387 |
2026-02-02 02:13:05.213500 | TASK [add-build-sshkey : Create Temp SSH key]
2026-02-02 02:13:06.192191 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-02-02 02:13:06.192415 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/13c99b7f5ab0455e81e88aef51d00270/work/13c99b7f5ab0455e81e88aef51d00270_id_rsa
2026-02-02 02:13:06.192455 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/13c99b7f5ab0455e81e88aef51d00270/work/13c99b7f5ab0455e81e88aef51d00270_id_rsa.pub
2026-02-02 02:13:06.192482 | orchestrator -> localhost | The key fingerprint is:
2026-02-02 02:13:06.192507 | orchestrator -> localhost | SHA256:82JVk323RWMAzM2T2Cb6zzp9jfas09hSAgYoDM600ys zuul-build-sshkey
2026-02-02 02:13:06.192530 | orchestrator -> localhost | The key's randomart image is:
2026-02-02 02:13:06.192566 | orchestrator -> localhost | +---[RSA 3072]----+
2026-02-02 02:13:06.192588 | orchestrator -> localhost | | oo .o.*.oo.|
2026-02-02 02:13:06.192609 | orchestrator -> localhost | | + oo . .= X...|
2026-02-02 02:13:06.192629 | orchestrator -> localhost | | = .. ..* o +|
2026-02-02 02:13:06.192649 | orchestrator -> localhost | | . . . .o. .+|
2026-02-02 02:13:06.192669 | orchestrator -> localhost | | E . S o. . . |
2026-02-02 02:13:06.192716 | orchestrator -> localhost | | . + . . .|
2026-02-02 02:13:06.192739 | orchestrator -> localhost | | o . + B.|
2026-02-02 02:13:06.192760 | orchestrator -> localhost | | . . . +*o+|
2026-02-02 02:13:06.192781 | orchestrator -> localhost | | .o.o=o|
2026-02-02 02:13:06.192802 | orchestrator -> localhost | +----[SHA256]-----+
2026-02-02 02:13:06.192861 | orchestrator -> localhost | ok: Runtime: 0:00:00.490863
2026-02-02 02:13:06.200739 |
2026-02-02 02:13:06.200853 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-02-02 02:13:06.220398 | orchestrator | ok
2026-02-02 02:13:06.230486 | orchestrator | included: /var/lib/zuul/builds/13c99b7f5ab0455e81e88aef51d00270/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-02-02 02:13:06.239836 |
2026-02-02 02:13:06.239940 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-02-02 02:13:06.263380 | orchestrator | skipping: Conditional result was False
2026-02-02 02:13:06.271090 |
2026-02-02 02:13:06.271192 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-02-02 02:13:06.896171 | orchestrator | changed
2026-02-02 02:13:06.905193 |
2026-02-02 02:13:06.905330 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-02-02 02:13:07.204372 | orchestrator | ok
2026-02-02 02:13:07.212553 |
2026-02-02 02:13:07.212685 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-02-02 02:13:07.702949 | orchestrator | ok
2026-02-02 02:13:07.710442 |
2026-02-02 02:13:07.710573 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-02-02 02:13:08.197074 | orchestrator | ok
2026-02-02 02:13:08.205471 |
2026-02-02 02:13:08.205583 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-02-02 02:13:08.229254 | orchestrator | skipping: Conditional result was False
2026-02-02 02:13:08.236935 |
2026-02-02 02:13:08.237038 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-02-02 02:13:08.678165 | orchestrator -> localhost | changed
2026-02-02 02:13:08.705534 |
2026-02-02 02:13:08.705717 | TASK [add-build-sshkey : Add back temp key]
2026-02-02 02:13:09.046104 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/13c99b7f5ab0455e81e88aef51d00270/work/13c99b7f5ab0455e81e88aef51d00270_id_rsa (zuul-build-sshkey)
2026-02-02 02:13:09.046665 | orchestrator -> localhost | ok: Runtime: 0:00:00.016826
2026-02-02 02:13:09.060028 |
2026-02-02 02:13:09.060158 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-02-02 02:13:09.540840 | orchestrator | ok
2026-02-02 02:13:09.552305 |
2026-02-02 02:13:09.552485 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-02-02 02:13:09.580084 | orchestrator | skipping: Conditional result was False
2026-02-02 02:13:09.627477 |
2026-02-02 02:13:09.627606 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-02-02 02:13:10.061022 | orchestrator | ok
2026-02-02 02:13:10.077935 |
2026-02-02 02:13:10.078064 | TASK [validate-host : Define zuul_info_dir fact]
2026-02-02 02:13:10.123587 | orchestrator | ok
2026-02-02 02:13:10.133387 |
2026-02-02 02:13:10.133507 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-02-02 02:13:10.440860 | orchestrator -> localhost | ok
2026-02-02 02:13:10.448603 |
2026-02-02 02:13:10.448775 | TASK [validate-host : Collect information about the host]
2026-02-02 02:13:12.736326 | orchestrator | ok
2026-02-02 02:13:12.751750 |
2026-02-02 02:13:12.751886 | TASK [validate-host : Sanitize hostname]
2026-02-02 02:13:12.818479 | orchestrator | ok
2026-02-02 02:13:12.826377 |
2026-02-02 02:13:12.826506 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-02-02 02:13:13.453496 | orchestrator -> localhost | changed
2026-02-02 02:13:13.460317 |
2026-02-02 02:13:13.460433 | TASK [validate-host : Collect information about zuul worker]
2026-02-02 02:13:13.917566 | orchestrator | ok
2026-02-02 02:13:13.926389 |
2026-02-02 02:13:13.926547 | TASK [validate-host : Write out all zuul information for each host]
2026-02-02 02:13:14.509022 | orchestrator -> localhost | changed
2026-02-02 02:13:14.529288 |
2026-02-02 02:13:14.529430 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-02-02 02:13:14.859942 | orchestrator | ok
2026-02-02 02:13:14.869251 |
2026-02-02 02:13:14.869377 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-02-02 02:13:38.111073 | orchestrator | changed:
2026-02-02 02:13:38.111365 | orchestrator | .d..t...... src/
2026-02-02 02:13:38.111415 | orchestrator | .d..t...... src/github.com/
2026-02-02 02:13:38.111452 | orchestrator | .d..t...... src/github.com/osism/
2026-02-02 02:13:38.111484 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-02-02 02:13:38.111513 | orchestrator | RedHat.yml
2026-02-02 02:13:38.127406 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-02-02 02:13:38.127424 | orchestrator | RedHat.yml
2026-02-02 02:13:38.127476 | orchestrator | = 1.53.0"...
2026-02-02 02:13:51.194358 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-02-02 02:13:51.351259 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-02-02 02:13:51.994725 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-02-02 02:13:52.623752 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-02-02 02:13:53.533589 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-02-02 02:13:54.042271 | orchestrator | - Installing hashicorp/local v2.6.2...
2026-02-02 02:13:54.572195 | orchestrator | - Installed hashicorp/local v2.6.2 (signed, key ID 0C0AF313E5FD9F80)
2026-02-02 02:13:54.572304 | orchestrator |
2026-02-02 02:13:54.572322 | orchestrator | Providers are signed by their developers.
2026-02-02 02:13:54.572336 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-02-02 02:13:54.572348 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-02-02 02:13:54.572364 | orchestrator |
2026-02-02 02:13:54.572377 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-02-02 02:13:54.572388 | orchestrator | selections it made above. Include this file in your version control repository
2026-02-02 02:13:54.572419 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-02-02 02:13:54.572432 | orchestrator | you run "tofu init" in the future.
2026-02-02 02:13:54.572674 | orchestrator |
2026-02-02 02:13:54.572741 | orchestrator | OpenTofu has been successfully initialized!
2026-02-02 02:13:54.572760 | orchestrator |
2026-02-02 02:13:54.572772 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-02-02 02:13:54.572783 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-02-02 02:13:54.572795 | orchestrator | should now work.
2026-02-02 02:13:54.572807 | orchestrator |
2026-02-02 02:13:54.572818 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-02-02 02:13:54.572830 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-02-02 02:13:54.572841 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-02-02 02:13:54.742465 | orchestrator | Created and switched to workspace "ci"!
2026-02-02 02:13:54.742639 | orchestrator |
2026-02-02 02:13:54.742670 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-02-02 02:13:54.742693 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-02-02 02:13:54.742713 | orchestrator | for this configuration.
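[editor's note] The "tofu init" output above implies a provider requirements block roughly like the following. This is a hedged reconstruction, not the testbed's actual configuration: only the ">= 2.2.0" constraint for hashicorp/local is visible in the log (the other constraint line is truncated), so the remaining entries list sources without constraints.

```hcl
# Hypothetical reconstruction of the provider requirements implied by the
# init output; only the hashicorp/local constraint is confirmed by the log.
terraform {
  required_providers {
    null = {
      source = "hashicorp/null"
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"
    }
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
  }
}
```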
2026-02-02 02:13:54.892749 | orchestrator | ci.auto.tfvars
2026-02-02 02:13:54.895156 | orchestrator | default_custom.tf
2026-02-02 02:13:55.904895 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-02-02 02:13:56.440145 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-02-02 02:13:56.730920 | orchestrator |
2026-02-02 02:13:56.731022 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-02-02 02:13:56.731032 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-02-02 02:13:56.731037 | orchestrator | + create
2026-02-02 02:13:56.731042 | orchestrator | <= read (data resources)
2026-02-02 02:13:56.731047 | orchestrator |
2026-02-02 02:13:56.731052 | orchestrator | OpenTofu will perform the following actions:
2026-02-02 02:13:56.731065 | orchestrator |
2026-02-02 02:13:56.731069 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-02-02 02:13:56.731074 | orchestrator | # (config refers to values not yet known)
2026-02-02 02:13:56.731078 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-02-02 02:13:56.731082 | orchestrator | + checksum = (known after apply)
2026-02-02 02:13:56.731086 | orchestrator | + created_at = (known after apply)
2026-02-02 02:13:56.731090 | orchestrator | + file = (known after apply)
2026-02-02 02:13:56.731094 | orchestrator | + id = (known after apply)
2026-02-02 02:13:56.731121 | orchestrator | + metadata = (known after apply)
2026-02-02 02:13:56.731125 | orchestrator | + min_disk_gb = (known after apply)
2026-02-02 02:13:56.731129 | orchestrator | + min_ram_mb = (known after apply)
2026-02-02 02:13:56.731133 | orchestrator | + most_recent = true
2026-02-02 02:13:56.731137 | orchestrator | + name = (known after apply)
2026-02-02 02:13:56.731141 | orchestrator | + protected = (known after apply)
2026-02-02 02:13:56.731145 | orchestrator | + region = (known after apply)
2026-02-02 02:13:56.731152 | orchestrator | + schema = (known after apply)
2026-02-02 02:13:56.731155 | orchestrator | + size_bytes = (known after apply)
2026-02-02 02:13:56.731159 | orchestrator | + tags = (known after apply)
2026-02-02 02:13:56.731163 | orchestrator | + updated_at = (known after apply)
2026-02-02 02:13:56.731167 | orchestrator | }
2026-02-02 02:13:56.731173 | orchestrator |
2026-02-02 02:13:56.731177 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-02-02 02:13:56.731181 | orchestrator | # (config refers to values not yet known)
2026-02-02 02:13:56.731185 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-02-02 02:13:56.731189 | orchestrator | + checksum = (known after apply)
2026-02-02 02:13:56.731193 | orchestrator | + created_at = (known after apply)
2026-02-02 02:13:56.731197 | orchestrator | + file = (known after apply)
2026-02-02 02:13:56.731200 | orchestrator | + id = (known after apply)
2026-02-02 02:13:56.731204 | orchestrator | + metadata = (known after apply)
2026-02-02 02:13:56.731208 | orchestrator | + min_disk_gb = (known after apply)
2026-02-02 02:13:56.731211 | orchestrator | + min_ram_mb = (known after apply)
2026-02-02 02:13:56.731215 | orchestrator | + most_recent = true
2026-02-02 02:13:56.731219 | orchestrator | + name = (known after apply)
2026-02-02 02:13:56.731223 | orchestrator | + protected = (known after apply)
2026-02-02 02:13:56.731227 | orchestrator | + region = (known after apply)
2026-02-02 02:13:56.731231 | orchestrator | + schema = (known after apply)
2026-02-02 02:13:56.731234 | orchestrator | + size_bytes = (known after apply)
2026-02-02 02:13:56.731238 | orchestrator | + tags = (known after apply)
2026-02-02 02:13:56.731242 | orchestrator | + updated_at = (known after apply)
2026-02-02 02:13:56.731246 | orchestrator | }
2026-02-02 02:13:56.731294 | orchestrator |
2026-02-02 02:13:56.731299 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-02-02 02:13:56.731304 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-02-02 02:13:56.731308 | orchestrator | + content = (known after apply)
2026-02-02 02:13:56.731312 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-02 02:13:56.731317 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-02 02:13:56.731321 | orchestrator | + content_md5 = (known after apply)
2026-02-02 02:13:56.731325 | orchestrator | + content_sha1 = (known after apply)
2026-02-02 02:13:56.731328 | orchestrator | + content_sha256 = (known after apply)
2026-02-02 02:13:56.731332 | orchestrator | + content_sha512 = (known after apply)
2026-02-02 02:13:56.731336 | orchestrator | + directory_permission = "0777"
2026-02-02 02:13:56.731340 | orchestrator | + file_permission = "0644"
2026-02-02 02:13:56.731344 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-02-02 02:13:56.731348 | orchestrator | + id = (known after apply)
2026-02-02 02:13:56.731352 | orchestrator | }
2026-02-02 02:13:56.731395 | orchestrator |
2026-02-02 02:13:56.731400 | orchestrator | # local_file.id_rsa_pub will be created
2026-02-02 02:13:56.731404 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-02-02 02:13:56.731408 | orchestrator | + content = (known after apply)
2026-02-02 02:13:56.731411 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-02 02:13:56.731415 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-02 02:13:56.731419 | orchestrator | + content_md5 = (known after apply)
2026-02-02 02:13:56.731423 | orchestrator | + content_sha1 = (known after apply)
2026-02-02 02:13:56.731427 | orchestrator | + content_sha256 = (known after apply)
2026-02-02 02:13:56.731430 | orchestrator | + content_sha512 = (known after apply)
2026-02-02 02:13:56.731434 | orchestrator | + directory_permission = "0777"
2026-02-02 02:13:56.731438 | orchestrator | + file_permission = "0644"
2026-02-02 02:13:56.731447 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-02-02 02:13:56.731451 | orchestrator | + id = (known after apply)
2026-02-02 02:13:56.731455 | orchestrator | }
2026-02-02 02:13:56.731559 | orchestrator |
2026-02-02 02:13:56.731575 | orchestrator | # local_file.inventory will be created
2026-02-02 02:13:56.731579 | orchestrator | + resource "local_file" "inventory" {
2026-02-02 02:13:56.731583 | orchestrator | + content = (known after apply)
2026-02-02 02:13:56.731587 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-02 02:13:56.731591 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-02 02:13:56.731595 | orchestrator | + content_md5 = (known after apply)
2026-02-02 02:13:56.731598 | orchestrator | + content_sha1 = (known after apply)
2026-02-02 02:13:56.731602 | orchestrator | + content_sha256 = (known after apply)
2026-02-02 02:13:56.731606 | orchestrator | + content_sha512 = (known after apply)
2026-02-02 02:13:56.731610 | orchestrator | + directory_permission = "0777"
2026-02-02 02:13:56.731614 | orchestrator | + file_permission = "0644"
2026-02-02 02:13:56.731617 | orchestrator | + filename = "inventory.ci"
2026-02-02 02:13:56.731621 | orchestrator | + id = (known after apply)
2026-02-02 02:13:56.731625 | orchestrator | }
2026-02-02 02:13:56.731711 | orchestrator |
2026-02-02 02:13:56.731716 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-02-02 02:13:56.731720 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-02-02 02:13:56.731724 | orchestrator | + content = (sensitive value)
2026-02-02 02:13:56.731728 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-02 02:13:56.731732 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-02 02:13:56.731736 | orchestrator | + content_md5 = (known after apply)
2026-02-02 02:13:56.731740 | orchestrator | + content_sha1 = (known after apply)
2026-02-02 02:13:56.731743 | orchestrator | + content_sha256 = (known after apply)
2026-02-02 02:13:56.731747 | orchestrator | + content_sha512 = (known after apply)
2026-02-02 02:13:56.731751 | orchestrator | + directory_permission = "0700"
2026-02-02 02:13:56.731755 | orchestrator | + file_permission = "0600"
2026-02-02 02:13:56.731758 | orchestrator | + filename = ".id_rsa.ci"
2026-02-02 02:13:56.731762 | orchestrator | + id = (known after apply)
2026-02-02 02:13:56.731766 | orchestrator | }
2026-02-02 02:13:56.731771 | orchestrator |
2026-02-02 02:13:56.731775 | orchestrator | # null_resource.node_semaphore will be created
2026-02-02 02:13:56.731779 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-02-02 02:13:56.731783 | orchestrator | + id = (known after apply)
2026-02-02 02:13:56.731787 | orchestrator | }
2026-02-02 02:13:56.731862 | orchestrator |
2026-02-02 02:13:56.731868 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-02-02 02:13:56.731872 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-02-02 02:13:56.731875 | orchestrator | + attachment = (known after apply)
2026-02-02 02:13:56.731879 | orchestrator | + availability_zone = "nova"
2026-02-02 02:13:56.731883 | orchestrator | + id = (known after apply)
2026-02-02 02:13:56.731887 | orchestrator | + image_id = (known after apply)
2026-02-02 02:13:56.731891 | orchestrator | + metadata = (known after apply)
2026-02-02 02:13:56.731895 | orchestrator | + name = "testbed-volume-manager-base"
2026-02-02 02:13:56.731899 | orchestrator | + region = (known after apply)
2026-02-02 02:13:56.731902 | orchestrator | + size = 80
2026-02-02 02:13:56.731906 | orchestrator | + volume_retype_policy = "never"
2026-02-02 02:13:56.731910 | orchestrator | + volume_type = "ssd"
2026-02-02 02:13:56.731914 | orchestrator | }
2026-02-02 02:13:56.731955 | orchestrator |
2026-02-02 02:13:56.731960 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-02-02 02:13:56.731964 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-02 02:13:56.731968 | orchestrator | + attachment = (known after apply)
2026-02-02 02:13:56.731971 | orchestrator | + availability_zone = "nova"
2026-02-02 02:13:56.731975 | orchestrator | + id = (known after apply)
2026-02-02 02:13:56.731984 | orchestrator | + image_id = (known after apply)
2026-02-02 02:13:56.731988 | orchestrator | + metadata = (known after apply)
2026-02-02 02:13:56.731992 | orchestrator | + name = "testbed-volume-0-node-base"
2026-02-02 02:13:56.731995 | orchestrator | + region = (known after apply)
2026-02-02 02:13:56.731999 | orchestrator | + size = 80
2026-02-02 02:13:56.732003 | orchestrator | + volume_retype_policy = "never"
2026-02-02 02:13:56.732007 | orchestrator | + volume_type = "ssd"
2026-02-02 02:13:56.732011 | orchestrator | }
2026-02-02 02:13:56.732041 | orchestrator |
2026-02-02 02:13:56.732046 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-02-02 02:13:56.732050 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-02 02:13:56.732054 | orchestrator | + attachment = (known after apply)
2026-02-02 02:13:56.732058 | orchestrator | + availability_zone = "nova"
2026-02-02 02:13:56.732062 | orchestrator | + id = (known after apply)
2026-02-02 02:13:56.732066 | orchestrator | + image_id = (known after apply)
2026-02-02 02:13:56.732070 | orchestrator | + metadata = (known after apply)
2026-02-02 02:13:56.732074 | orchestrator | + name = "testbed-volume-1-node-base"
2026-02-02 02:13:56.732077 | orchestrator | + region = (known after apply)
2026-02-02 02:13:56.732081 | orchestrator | + size = 80
2026-02-02 02:13:56.732085 | orchestrator | + volume_retype_policy = "never"
2026-02-02 02:13:56.732089 | orchestrator | + volume_type = "ssd"
2026-02-02 02:13:56.732093 | orchestrator | }
2026-02-02 02:13:56.732147 | orchestrator |
2026-02-02 02:13:56.732153 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-02-02 02:13:56.732157 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-02 02:13:56.732161 | orchestrator | + attachment = (known after apply)
2026-02-02 02:13:56.732164 | orchestrator | + availability_zone = "nova"
2026-02-02 02:13:56.732168 | orchestrator | + id = (known after apply)
2026-02-02 02:13:56.732172 | orchestrator | + image_id = (known after apply)
2026-02-02 02:13:56.732176 | orchestrator | + metadata = (known after apply)
2026-02-02 02:13:56.732179 | orchestrator | + name = "testbed-volume-2-node-base"
2026-02-02 02:13:56.732183 | orchestrator | + region = (known after apply)
2026-02-02 02:13:56.732187 | orchestrator | + size = 80
2026-02-02 02:13:56.732191 | orchestrator | + volume_retype_policy = "never"
2026-02-02 02:13:56.732194 | orchestrator | + volume_type = "ssd"
2026-02-02 02:13:56.732198 | orchestrator | }
2026-02-02 02:13:56.732204 | orchestrator |
2026-02-02 02:13:56.732208 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-02-02 02:13:56.732211 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-02 02:13:56.732215 | orchestrator | + attachment = (known after apply)
2026-02-02 02:13:56.732219 | orchestrator | + availability_zone = "nova"
2026-02-02 02:13:56.732223 | orchestrator | + id = (known after apply)
2026-02-02 02:13:56.732227 | orchestrator | + image_id = (known after apply)
2026-02-02 02:13:56.732230 | orchestrator | + metadata = (known after apply)
2026-02-02 02:13:56.732237 | orchestrator | + name = "testbed-volume-3-node-base"
2026-02-02 02:13:56.732241 | orchestrator | + region = (known after apply)
2026-02-02 02:13:56.732245 | orchestrator | + size = 80
2026-02-02 02:13:56.732249 | orchestrator | + volume_retype_policy = "never"
2026-02-02 02:13:56.732253 | orchestrator | + volume_type = "ssd"
2026-02-02 02:13:56.732256 | orchestrator | }
2026-02-02 02:13:56.732335 | orchestrator |
2026-02-02 02:13:56.732341 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-02-02 02:13:56.732345 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-02 02:13:56.732348 | orchestrator | + attachment = (known after apply)
2026-02-02 02:13:56.732352 | orchestrator | + availability_zone = "nova"
2026-02-02 02:13:56.732356 | orchestrator | + id = (known after apply)
2026-02-02 02:13:56.732363 | orchestrator | + image_id = (known after apply)
2026-02-02 02:13:56.732367 | orchestrator | + metadata = (known after apply)
2026-02-02 02:13:56.732371 | orchestrator | + name = "testbed-volume-4-node-base"
2026-02-02 02:13:56.732375 | orchestrator | + region = (known after apply)
2026-02-02 02:13:56.732379 | orchestrator | + size = 80
2026-02-02 02:13:56.732382 | orchestrator | + volume_retype_policy = "never"
2026-02-02 02:13:56.732386 | orchestrator | + volume_type = "ssd"
2026-02-02 02:13:56.732390 | orchestrator | }
2026-02-02 02:13:56.732395 | orchestrator |
2026-02-02 02:13:56.732399 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-02-02 02:13:56.732403 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-02 02:13:56.732407 | orchestrator | + attachment = (known after apply)
2026-02-02 02:13:56.732411 | orchestrator | + availability_zone = "nova"
2026-02-02 02:13:56.732415 | orchestrator | + id = (known after apply)
2026-02-02 02:13:56.732418 | orchestrator | + image_id = (known after apply)
2026-02-02 02:13:56.732422 | orchestrator | + metadata = (known after apply)
2026-02-02 02:13:56.732426 | orchestrator | + name = "testbed-volume-5-node-base"
2026-02-02 02:13:56.732430 | orchestrator | + region = (known after apply)
2026-02-02 02:13:56.732433 | orchestrator | + size = 80
2026-02-02 02:13:56.732437 | orchestrator | + volume_retype_policy = "never"
2026-02-02 02:13:56.732441 | orchestrator | + volume_type = "ssd"
2026-02-02 02:13:56.732445 | orchestrator | }
2026-02-02 02:13:56.732450 | orchestrator |
2026-02-02 02:13:56.732454 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-02-02 02:13:56.732458 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-02 02:13:56.732462 | orchestrator | + attachment = (known after apply)
2026-02-02 02:13:56.732466 | orchestrator | + availability_zone = "nova"
2026-02-02 02:13:56.732470 | orchestrator | + id = (known after apply)
2026-02-02 02:13:56.732473 | orchestrator | + metadata = (known after apply)
2026-02-02 02:13:56.732477 | orchestrator | + name = "testbed-volume-0-node-3"
2026-02-02 02:13:56.732495 | orchestrator | + region = (known after apply)
2026-02-02 02:13:56.732499 | orchestrator | + size = 20
2026-02-02 02:13:56.732503 | orchestrator | + volume_retype_policy = "never"
2026-02-02 02:13:56.732507 | orchestrator | + volume_type = "ssd"
2026-02-02 02:13:56.732510 | orchestrator | }
2026-02-02 02:13:56.732561 | orchestrator |
2026-02-02 02:13:56.732569 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-02-02 02:13:56.732573 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-02 02:13:56.732577 | orchestrator | + attachment = (known after apply)
2026-02-02 02:13:56.732580 | orchestrator | + availability_zone = "nova"
2026-02-02 02:13:56.732584 | orchestrator | + id = (known after apply)
2026-02-02 02:13:56.732588 | orchestrator | + metadata = (known after apply)
2026-02-02 02:13:56.732592 | orchestrator | + name = "testbed-volume-1-node-4"
2026-02-02 02:13:56.732595 | orchestrator | + region = (known after apply)
2026-02-02 02:13:56.732599 | orchestrator | + size = 20
2026-02-02 02:13:56.732603 | orchestrator | + volume_retype_policy = "never"
2026-02-02 02:13:56.732607 | orchestrator | + volume_type = "ssd"
2026-02-02 02:13:56.732611 | orchestrator | }
2026-02-02 02:13:56.732639 | orchestrator |
2026-02-02 02:13:56.732645 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-02-02 02:13:56.732648 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-02 02:13:56.732652 | orchestrator | + attachment = (known after apply)
2026-02-02 02:13:56.732656 | orchestrator | + availability_zone = "nova"
2026-02-02 02:13:56.732660 | orchestrator | + id = (known after apply)
2026-02-02 02:13:56.732664 | orchestrator | + metadata = (known after apply)
2026-02-02 02:13:56.732667 | orchestrator | + name = "testbed-volume-2-node-5"
2026-02-02 02:13:56.732671 | orchestrator | + region = (known after apply)
2026-02-02 02:13:56.732679 | orchestrator | + size = 20
2026-02-02 02:13:56.732682 | orchestrator | + volume_retype_policy = "never"
2026-02-02 02:13:56.732686 | orchestrator | + volume_type = "ssd"
2026-02-02 02:13:56.732690 | orchestrator | }
2026-02-02 02:13:56.732723 | orchestrator |
2026-02-02 02:13:56.732728 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-02-02 02:13:56.732732 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-02 02:13:56.732736 | orchestrator | + attachment = (known after apply)
2026-02-02 02:13:56.732740 | orchestrator | + availability_zone = "nova"
2026-02-02 02:13:56.732744 | orchestrator | + id = (known after apply)
2026-02-02 02:13:56.732747 | orchestrator | + metadata = (known after apply)
2026-02-02 02:13:56.732751 | orchestrator | + name = "testbed-volume-3-node-3"
2026-02-02 02:13:56.732755 | orchestrator | + region = (known after apply)
2026-02-02 02:13:56.732759 | orchestrator | + size = 20
2026-02-02 02:13:56.732763 | orchestrator | + volume_retype_policy = "never"
2026-02-02 02:13:56.732766 | orchestrator | + volume_type = "ssd"
2026-02-02 02:13:56.732770 | orchestrator | }
2026-02-02 02:13:56.732803 | orchestrator |
2026-02-02 02:13:56.732808 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-02-02 02:13:56.732812 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-02 02:13:56.732816 | orchestrator | + attachment = (known after apply)
2026-02-02 02:13:56.732819 | orchestrator | + availability_zone = "nova"
2026-02-02 02:13:56.732823 | orchestrator | + id = (known after apply)
2026-02-02 02:13:56.732827 | orchestrator | + metadata = (known after apply)
2026-02-02 02:13:56.732831 | orchestrator | + name = "testbed-volume-4-node-4"
2026-02-02 02:13:56.732834 | orchestrator | + region = (known after apply)
2026-02-02 02:13:56.732841 | orchestrator | + size = 20
2026-02-02 02:13:56.732845 | orchestrator | + volume_retype_policy = "never"
2026-02-02 02:13:56.732849 | orchestrator | + volume_type = "ssd"
2026-02-02 02:13:56.732853 | orchestrator | }
2026-02-02 02:13:56.732895 | orchestrator |
2026-02-02 02:13:56.732903 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-02-02 02:13:56.732907 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-02 02:13:56.732911 | orchestrator | + attachment = (known after apply)
2026-02-02 02:13:56.732915 | orchestrator | + availability_zone = "nova"
2026-02-02 02:13:56.732919 | orchestrator | + id = (known after apply)
2026-02-02 02:13:56.732922 | orchestrator | + metadata = (known after apply)
2026-02-02 02:13:56.732926 | orchestrator | + name = "testbed-volume-5-node-5"
2026-02-02 02:13:56.732930 | orchestrator | + region = (known after apply)
2026-02-02 02:13:56.732934 | orchestrator | + size = 20
2026-02-02 02:13:56.732937 | orchestrator | + volume_retype_policy = "never"
2026-02-02 02:13:56.732941 | orchestrator | + volume_type = "ssd"
2026-02-02 02:13:56.732945 | orchestrator | }
2026-02-02 02:13:56.732950 | orchestrator |
2026-02-02 02:13:56.732955 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-02-02 02:13:56.732958 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-02 02:13:56.732962 | orchestrator | + attachment = (known after apply)
2026-02-02 02:13:56.732966 | orchestrator | + availability_zone = "nova"
2026-02-02 02:13:56.732970 | orchestrator | + id = (known after apply)
2026-02-02 02:13:56.732973 | orchestrator | + metadata = (known after apply)
2026-02-02 02:13:56.732977 | orchestrator | + name = "testbed-volume-6-node-3"
2026-02-02 02:13:56.732981 | orchestrator | + region = (known after apply)
2026-02-02 02:13:56.732985 | orchestrator | + size = 20
2026-02-02 02:13:56.732988 | orchestrator | + volume_retype_policy = "never"
2026-02-02 02:13:56.732992 | orchestrator | + volume_type = "ssd"
2026-02-02 02:13:56.732996 | orchestrator | }
2026-02-02 02:13:56.733046 | orchestrator |
2026-02-02 02:13:56.733054 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-02-02 02:13:56.733058 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-02 02:13:56.733066 | orchestrator | + attachment = (known after apply)
2026-02-02 02:13:56.733070 | orchestrator | + availability_zone = "nova"
2026-02-02 02:13:56.733074 | orchestrator | + id = (known after apply)
2026-02-02 02:13:56.733077 | orchestrator | + metadata = (known after apply)
2026-02-02 02:13:56.733081 | orchestrator | + name = "testbed-volume-7-node-4"
2026-02-02 02:13:56.733085 | orchestrator | + region = (known after apply)
2026-02-02 02:13:56.733089 | orchestrator | + size = 20
2026-02-02 02:13:56.733093 | orchestrator | + volume_retype_policy = "never"
2026-02-02 02:13:56.733097 | orchestrator | + volume_type = "ssd"
2026-02-02 02:13:56.733101 | orchestrator | }
2026-02-02 02:13:56.733106 | orchestrator |
2026-02-02 02:13:56.733110 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-02-02 02:13:56.733114 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-02 02:13:56.733118 | orchestrator | + attachment = (known after apply) 2026-02-02 02:13:56.733122 | orchestrator | + availability_zone = "nova" 2026-02-02 02:13:56.733125 | orchestrator | + id = (known after apply) 2026-02-02 02:13:56.733129 | orchestrator | + metadata = (known after apply) 2026-02-02 02:13:56.733133 | orchestrator | + name = "testbed-volume-8-node-5" 2026-02-02 02:13:56.733137 | orchestrator | + region = (known after apply) 2026-02-02 02:13:56.733140 | orchestrator | + size = 20 2026-02-02 02:13:56.733144 | orchestrator | + volume_retype_policy = "never" 2026-02-02 02:13:56.733148 | orchestrator | + volume_type = "ssd" 2026-02-02 02:13:56.733152 | orchestrator | } 2026-02-02 02:13:56.733399 | orchestrator | 2026-02-02 02:13:56.733409 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-02-02 02:13:56.733413 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-02-02 02:13:56.733417 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-02 02:13:56.733421 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-02 02:13:56.733424 | orchestrator | + all_metadata = (known after apply) 2026-02-02 02:13:56.733428 | orchestrator | + all_tags = (known after apply) 2026-02-02 02:13:56.733432 | orchestrator | + availability_zone = "nova" 2026-02-02 02:13:56.733436 | orchestrator | + config_drive = true 2026-02-02 02:13:56.733440 | orchestrator | + created = (known after apply) 2026-02-02 02:13:56.733443 | orchestrator | + flavor_id = (known after apply) 2026-02-02 02:13:56.733447 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-02-02 02:13:56.733451 | orchestrator | + force_delete = false 2026-02-02 02:13:56.733455 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-02 02:13:56.733458 | 
orchestrator | + id = (known after apply) 2026-02-02 02:13:56.733462 | orchestrator | + image_id = (known after apply) 2026-02-02 02:13:56.733466 | orchestrator | + image_name = (known after apply) 2026-02-02 02:13:56.733470 | orchestrator | + key_pair = "testbed" 2026-02-02 02:13:56.733473 | orchestrator | + name = "testbed-manager" 2026-02-02 02:13:56.733477 | orchestrator | + power_state = "active" 2026-02-02 02:13:56.733493 | orchestrator | + region = (known after apply) 2026-02-02 02:13:56.733497 | orchestrator | + security_groups = (known after apply) 2026-02-02 02:13:56.733501 | orchestrator | + stop_before_destroy = false 2026-02-02 02:13:56.733504 | orchestrator | + updated = (known after apply) 2026-02-02 02:13:56.733508 | orchestrator | + user_data = (sensitive value) 2026-02-02 02:13:56.733512 | orchestrator | 2026-02-02 02:13:56.733516 | orchestrator | + block_device { 2026-02-02 02:13:56.733520 | orchestrator | + boot_index = 0 2026-02-02 02:13:56.733524 | orchestrator | + delete_on_termination = false 2026-02-02 02:13:56.733531 | orchestrator | + destination_type = "volume" 2026-02-02 02:13:56.733535 | orchestrator | + multiattach = false 2026-02-02 02:13:56.733539 | orchestrator | + source_type = "volume" 2026-02-02 02:13:56.733543 | orchestrator | + uuid = (known after apply) 2026-02-02 02:13:56.733552 | orchestrator | } 2026-02-02 02:13:56.733556 | orchestrator | 2026-02-02 02:13:56.733560 | orchestrator | + network { 2026-02-02 02:13:56.733564 | orchestrator | + access_network = false 2026-02-02 02:13:56.733567 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-02 02:13:56.733571 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-02 02:13:56.733575 | orchestrator | + mac = (known after apply) 2026-02-02 02:13:56.733579 | orchestrator | + name = (known after apply) 2026-02-02 02:13:56.733582 | orchestrator | + port = (known after apply) 2026-02-02 02:13:56.733586 | orchestrator | + uuid = (known after apply) 2026-02-02 
02:13:56.733590 | orchestrator | } 2026-02-02 02:13:56.733594 | orchestrator | } 2026-02-02 02:13:56.733669 | orchestrator | 2026-02-02 02:13:56.733675 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-02-02 02:13:56.733679 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-02 02:13:56.733683 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-02 02:13:56.733687 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-02 02:13:56.733691 | orchestrator | + all_metadata = (known after apply) 2026-02-02 02:13:56.733695 | orchestrator | + all_tags = (known after apply) 2026-02-02 02:13:56.733698 | orchestrator | + availability_zone = "nova" 2026-02-02 02:13:56.733702 | orchestrator | + config_drive = true 2026-02-02 02:13:56.733706 | orchestrator | + created = (known after apply) 2026-02-02 02:13:56.733710 | orchestrator | + flavor_id = (known after apply) 2026-02-02 02:13:56.733713 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-02 02:13:56.733717 | orchestrator | + force_delete = false 2026-02-02 02:13:56.733721 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-02 02:13:56.733725 | orchestrator | + id = (known after apply) 2026-02-02 02:13:56.733729 | orchestrator | + image_id = (known after apply) 2026-02-02 02:13:56.733732 | orchestrator | + image_name = (known after apply) 2026-02-02 02:13:56.733736 | orchestrator | + key_pair = "testbed" 2026-02-02 02:13:56.733740 | orchestrator | + name = "testbed-node-0" 2026-02-02 02:13:56.733744 | orchestrator | + power_state = "active" 2026-02-02 02:13:56.733747 | orchestrator | + region = (known after apply) 2026-02-02 02:13:56.733751 | orchestrator | + security_groups = (known after apply) 2026-02-02 02:13:56.733755 | orchestrator | + stop_before_destroy = false 2026-02-02 02:13:56.733759 | orchestrator | + updated = (known after apply) 2026-02-02 02:13:56.733762 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-02 02:13:56.733766 | orchestrator | 2026-02-02 02:13:56.733770 | orchestrator | + block_device { 2026-02-02 02:13:56.733774 | orchestrator | + boot_index = 0 2026-02-02 02:13:56.733778 | orchestrator | + delete_on_termination = false 2026-02-02 02:13:56.733782 | orchestrator | + destination_type = "volume" 2026-02-02 02:13:56.733785 | orchestrator | + multiattach = false 2026-02-02 02:13:56.733789 | orchestrator | + source_type = "volume" 2026-02-02 02:13:56.733793 | orchestrator | + uuid = (known after apply) 2026-02-02 02:13:56.733797 | orchestrator | } 2026-02-02 02:13:56.733801 | orchestrator | 2026-02-02 02:13:56.733804 | orchestrator | + network { 2026-02-02 02:13:56.733808 | orchestrator | + access_network = false 2026-02-02 02:13:56.733812 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-02 02:13:56.733816 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-02 02:13:56.733820 | orchestrator | + mac = (known after apply) 2026-02-02 02:13:56.733823 | orchestrator | + name = (known after apply) 2026-02-02 02:13:56.733827 | orchestrator | + port = (known after apply) 2026-02-02 02:13:56.733831 | orchestrator | + uuid = (known after apply) 2026-02-02 02:13:56.733835 | orchestrator | } 2026-02-02 02:13:56.733839 | orchestrator | } 2026-02-02 02:13:56.733938 | orchestrator | 2026-02-02 02:13:56.733944 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-02-02 02:13:56.733948 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-02 02:13:56.733951 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-02 02:13:56.733966 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-02 02:13:56.733970 | orchestrator | + all_metadata = (known after apply) 2026-02-02 02:13:56.733974 | orchestrator | + all_tags = (known after apply) 2026-02-02 02:13:56.733978 | orchestrator | + availability_zone = "nova" 2026-02-02 02:13:56.733981 
| orchestrator | + config_drive = true 2026-02-02 02:13:56.733985 | orchestrator | + created = (known after apply) 2026-02-02 02:13:56.733989 | orchestrator | + flavor_id = (known after apply) 2026-02-02 02:13:56.733993 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-02 02:13:56.733996 | orchestrator | + force_delete = false 2026-02-02 02:13:56.734000 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-02 02:13:56.734004 | orchestrator | + id = (known after apply) 2026-02-02 02:13:56.734008 | orchestrator | + image_id = (known after apply) 2026-02-02 02:13:56.734011 | orchestrator | + image_name = (known after apply) 2026-02-02 02:13:56.734032 | orchestrator | + key_pair = "testbed" 2026-02-02 02:13:56.734036 | orchestrator | + name = "testbed-node-1" 2026-02-02 02:13:56.734040 | orchestrator | + power_state = "active" 2026-02-02 02:13:56.734044 | orchestrator | + region = (known after apply) 2026-02-02 02:13:56.734048 | orchestrator | + security_groups = (known after apply) 2026-02-02 02:13:56.734051 | orchestrator | + stop_before_destroy = false 2026-02-02 02:13:56.734055 | orchestrator | + updated = (known after apply) 2026-02-02 02:13:56.734059 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-02 02:13:56.734063 | orchestrator | 2026-02-02 02:13:56.734066 | orchestrator | + block_device { 2026-02-02 02:13:56.734070 | orchestrator | + boot_index = 0 2026-02-02 02:13:56.734074 | orchestrator | + delete_on_termination = false 2026-02-02 02:13:56.734078 | orchestrator | + destination_type = "volume" 2026-02-02 02:13:56.734081 | orchestrator | + multiattach = false 2026-02-02 02:13:56.734085 | orchestrator | + source_type = "volume" 2026-02-02 02:13:56.734089 | orchestrator | + uuid = (known after apply) 2026-02-02 02:13:56.734093 | orchestrator | } 2026-02-02 02:13:56.734096 | orchestrator | 2026-02-02 02:13:56.734100 | orchestrator | + network { 2026-02-02 02:13:56.734104 | orchestrator | + access_network = 
false 2026-02-02 02:13:56.734108 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-02 02:13:56.734111 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-02 02:13:56.734115 | orchestrator | + mac = (known after apply) 2026-02-02 02:13:56.734119 | orchestrator | + name = (known after apply) 2026-02-02 02:13:56.734123 | orchestrator | + port = (known after apply) 2026-02-02 02:13:56.734126 | orchestrator | + uuid = (known after apply) 2026-02-02 02:13:56.734130 | orchestrator | } 2026-02-02 02:13:56.734134 | orchestrator | } 2026-02-02 02:13:56.734227 | orchestrator | 2026-02-02 02:13:56.734232 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-02-02 02:13:56.734236 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-02 02:13:56.734240 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-02 02:13:56.734244 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-02 02:13:56.734248 | orchestrator | + all_metadata = (known after apply) 2026-02-02 02:13:56.734252 | orchestrator | + all_tags = (known after apply) 2026-02-02 02:13:56.734259 | orchestrator | + availability_zone = "nova" 2026-02-02 02:13:56.734263 | orchestrator | + config_drive = true 2026-02-02 02:13:56.734267 | orchestrator | + created = (known after apply) 2026-02-02 02:13:56.734271 | orchestrator | + flavor_id = (known after apply) 2026-02-02 02:13:56.734274 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-02 02:13:56.734278 | orchestrator | + force_delete = false 2026-02-02 02:13:56.734282 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-02 02:13:56.734286 | orchestrator | + id = (known after apply) 2026-02-02 02:13:56.734290 | orchestrator | + image_id = (known after apply) 2026-02-02 02:13:56.734317 | orchestrator | + image_name = (known after apply) 2026-02-02 02:13:56.734321 | orchestrator | + key_pair = "testbed" 2026-02-02 02:13:56.734325 | orchestrator | + name = 
"testbed-node-2" 2026-02-02 02:13:56.734328 | orchestrator | + power_state = "active" 2026-02-02 02:13:56.734332 | orchestrator | + region = (known after apply) 2026-02-02 02:13:56.734336 | orchestrator | + security_groups = (known after apply) 2026-02-02 02:13:56.734340 | orchestrator | + stop_before_destroy = false 2026-02-02 02:13:56.734344 | orchestrator | + updated = (known after apply) 2026-02-02 02:13:56.734348 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-02 02:13:56.734351 | orchestrator | 2026-02-02 02:13:56.734355 | orchestrator | + block_device { 2026-02-02 02:13:56.734359 | orchestrator | + boot_index = 0 2026-02-02 02:13:56.734363 | orchestrator | + delete_on_termination = false 2026-02-02 02:13:56.734367 | orchestrator | + destination_type = "volume" 2026-02-02 02:13:56.734370 | orchestrator | + multiattach = false 2026-02-02 02:13:56.734374 | orchestrator | + source_type = "volume" 2026-02-02 02:13:56.734378 | orchestrator | + uuid = (known after apply) 2026-02-02 02:13:56.734382 | orchestrator | } 2026-02-02 02:13:56.734386 | orchestrator | 2026-02-02 02:13:56.734389 | orchestrator | + network { 2026-02-02 02:13:56.734393 | orchestrator | + access_network = false 2026-02-02 02:13:56.734397 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-02 02:13:56.734401 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-02 02:13:56.734405 | orchestrator | + mac = (known after apply) 2026-02-02 02:13:56.734408 | orchestrator | + name = (known after apply) 2026-02-02 02:13:56.734412 | orchestrator | + port = (known after apply) 2026-02-02 02:13:56.734416 | orchestrator | + uuid = (known after apply) 2026-02-02 02:13:56.734420 | orchestrator | } 2026-02-02 02:13:56.734423 | orchestrator | } 2026-02-02 02:13:56.734517 | orchestrator | 2026-02-02 02:13:56.734523 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-02-02 02:13:56.734527 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-02-02 02:13:56.734531 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-02 02:13:56.734535 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-02 02:13:56.734539 | orchestrator | + all_metadata = (known after apply) 2026-02-02 02:13:56.734542 | orchestrator | + all_tags = (known after apply) 2026-02-02 02:13:56.734546 | orchestrator | + availability_zone = "nova" 2026-02-02 02:13:56.734550 | orchestrator | + config_drive = true 2026-02-02 02:13:56.734554 | orchestrator | + created = (known after apply) 2026-02-02 02:13:56.734558 | orchestrator | + flavor_id = (known after apply) 2026-02-02 02:13:56.734561 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-02 02:13:56.734565 | orchestrator | + force_delete = false 2026-02-02 02:13:56.734569 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-02 02:13:56.734573 | orchestrator | + id = (known after apply) 2026-02-02 02:13:56.734576 | orchestrator | + image_id = (known after apply) 2026-02-02 02:13:56.734580 | orchestrator | + image_name = (known after apply) 2026-02-02 02:13:56.734584 | orchestrator | + key_pair = "testbed" 2026-02-02 02:13:56.734588 | orchestrator | + name = "testbed-node-3" 2026-02-02 02:13:56.734591 | orchestrator | + power_state = "active" 2026-02-02 02:13:56.734595 | orchestrator | + region = (known after apply) 2026-02-02 02:13:56.734599 | orchestrator | + security_groups = (known after apply) 2026-02-02 02:13:56.734603 | orchestrator | + stop_before_destroy = false 2026-02-02 02:13:56.734606 | orchestrator | + updated = (known after apply) 2026-02-02 02:13:56.734610 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-02 02:13:56.734614 | orchestrator | 2026-02-02 02:13:56.734618 | orchestrator | + block_device { 2026-02-02 02:13:56.734625 | orchestrator | + boot_index = 0 2026-02-02 02:13:56.734629 | orchestrator | + delete_on_termination = false 2026-02-02 
02:13:56.734632 | orchestrator | + destination_type = "volume" 2026-02-02 02:13:56.734640 | orchestrator | + multiattach = false 2026-02-02 02:13:56.734643 | orchestrator | + source_type = "volume" 2026-02-02 02:13:56.734647 | orchestrator | + uuid = (known after apply) 2026-02-02 02:13:56.734651 | orchestrator | } 2026-02-02 02:13:56.734655 | orchestrator | 2026-02-02 02:13:56.734658 | orchestrator | + network { 2026-02-02 02:13:56.734662 | orchestrator | + access_network = false 2026-02-02 02:13:56.734666 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-02 02:13:56.734670 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-02 02:13:56.734673 | orchestrator | + mac = (known after apply) 2026-02-02 02:13:56.734677 | orchestrator | + name = (known after apply) 2026-02-02 02:13:56.734681 | orchestrator | + port = (known after apply) 2026-02-02 02:13:56.734685 | orchestrator | + uuid = (known after apply) 2026-02-02 02:13:56.734688 | orchestrator | } 2026-02-02 02:13:56.734692 | orchestrator | } 2026-02-02 02:13:56.734762 | orchestrator | 2026-02-02 02:13:56.734768 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-02-02 02:13:56.734772 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-02 02:13:56.734776 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-02 02:13:56.734780 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-02 02:13:56.734784 | orchestrator | + all_metadata = (known after apply) 2026-02-02 02:13:56.734787 | orchestrator | + all_tags = (known after apply) 2026-02-02 02:13:56.734791 | orchestrator | + availability_zone = "nova" 2026-02-02 02:13:56.734795 | orchestrator | + config_drive = true 2026-02-02 02:13:56.734799 | orchestrator | + created = (known after apply) 2026-02-02 02:13:56.734803 | orchestrator | + flavor_id = (known after apply) 2026-02-02 02:13:56.734807 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-02 02:13:56.734810 | 
orchestrator | + force_delete = false 2026-02-02 02:13:56.734814 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-02 02:13:56.734818 | orchestrator | + id = (known after apply) 2026-02-02 02:13:56.734822 | orchestrator | + image_id = (known after apply) 2026-02-02 02:13:56.734826 | orchestrator | + image_name = (known after apply) 2026-02-02 02:13:56.734830 | orchestrator | + key_pair = "testbed" 2026-02-02 02:13:56.734833 | orchestrator | + name = "testbed-node-4" 2026-02-02 02:13:56.734837 | orchestrator | + power_state = "active" 2026-02-02 02:13:56.734841 | orchestrator | + region = (known after apply) 2026-02-02 02:13:56.734845 | orchestrator | + security_groups = (known after apply) 2026-02-02 02:13:56.734849 | orchestrator | + stop_before_destroy = false 2026-02-02 02:13:56.734853 | orchestrator | + updated = (known after apply) 2026-02-02 02:13:56.734856 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-02 02:13:56.734860 | orchestrator | 2026-02-02 02:13:56.734864 | orchestrator | + block_device { 2026-02-02 02:13:56.734868 | orchestrator | + boot_index = 0 2026-02-02 02:13:56.734872 | orchestrator | + delete_on_termination = false 2026-02-02 02:13:56.734876 | orchestrator | + destination_type = "volume" 2026-02-02 02:13:56.734879 | orchestrator | + multiattach = false 2026-02-02 02:13:56.734883 | orchestrator | + source_type = "volume" 2026-02-02 02:13:56.734887 | orchestrator | + uuid = (known after apply) 2026-02-02 02:13:56.734891 | orchestrator | } 2026-02-02 02:13:56.734894 | orchestrator | 2026-02-02 02:13:56.734898 | orchestrator | + network { 2026-02-02 02:13:56.734902 | orchestrator | + access_network = false 2026-02-02 02:13:56.734906 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-02 02:13:56.734910 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-02 02:13:56.734914 | orchestrator | + mac = (known after apply) 2026-02-02 02:13:56.734917 | orchestrator | + name = (known 
after apply) 2026-02-02 02:13:56.734921 | orchestrator | + port = (known after apply) 2026-02-02 02:13:56.734925 | orchestrator | + uuid = (known after apply) 2026-02-02 02:13:56.734929 | orchestrator | } 2026-02-02 02:13:56.734933 | orchestrator | } 2026-02-02 02:13:56.735037 | orchestrator | 2026-02-02 02:13:56.735042 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-02-02 02:13:56.735046 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-02 02:13:56.735050 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-02 02:13:56.735054 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-02 02:13:56.735058 | orchestrator | + all_metadata = (known after apply) 2026-02-02 02:13:56.735062 | orchestrator | + all_tags = (known after apply) 2026-02-02 02:13:56.735065 | orchestrator | + availability_zone = "nova" 2026-02-02 02:13:56.735069 | orchestrator | + config_drive = true 2026-02-02 02:13:56.735073 | orchestrator | + created = (known after apply) 2026-02-02 02:13:56.735077 | orchestrator | + flavor_id = (known after apply) 2026-02-02 02:13:56.735081 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-02 02:13:56.735084 | orchestrator | + force_delete = false 2026-02-02 02:13:56.735091 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-02 02:13:56.735095 | orchestrator | + id = (known after apply) 2026-02-02 02:13:56.735099 | orchestrator | + image_id = (known after apply) 2026-02-02 02:13:56.735103 | orchestrator | + image_name = (known after apply) 2026-02-02 02:13:56.735106 | orchestrator | + key_pair = "testbed" 2026-02-02 02:13:56.735110 | orchestrator | + name = "testbed-node-5" 2026-02-02 02:13:56.735114 | orchestrator | + power_state = "active" 2026-02-02 02:13:56.735118 | orchestrator | + region = (known after apply) 2026-02-02 02:13:56.735122 | orchestrator | + security_groups = (known after apply) 2026-02-02 02:13:56.735125 | orchestrator | + 
stop_before_destroy = false 2026-02-02 02:13:56.735129 | orchestrator | + updated = (known after apply) 2026-02-02 02:13:56.735133 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-02 02:13:56.735137 | orchestrator | 2026-02-02 02:13:56.735141 | orchestrator | + block_device { 2026-02-02 02:13:56.735144 | orchestrator | + boot_index = 0 2026-02-02 02:13:56.735148 | orchestrator | + delete_on_termination = false 2026-02-02 02:13:56.735152 | orchestrator | + destination_type = "volume" 2026-02-02 02:13:56.735156 | orchestrator | + multiattach = false 2026-02-02 02:13:56.735159 | orchestrator | + source_type = "volume" 2026-02-02 02:13:56.735163 | orchestrator | + uuid = (known after apply) 2026-02-02 02:13:56.735167 | orchestrator | } 2026-02-02 02:13:56.735171 | orchestrator | 2026-02-02 02:13:56.735174 | orchestrator | + network { 2026-02-02 02:13:56.735178 | orchestrator | + access_network = false 2026-02-02 02:13:56.735182 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-02 02:13:56.735186 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-02 02:13:56.735189 | orchestrator | + mac = (known after apply) 2026-02-02 02:13:56.735193 | orchestrator | + name = (known after apply) 2026-02-02 02:13:56.735197 | orchestrator | + port = (known after apply) 2026-02-02 02:13:56.735201 | orchestrator | + uuid = (known after apply) 2026-02-02 02:13:56.735204 | orchestrator | } 2026-02-02 02:13:56.735208 | orchestrator | } 2026-02-02 02:13:56.735215 | orchestrator | 2026-02-02 02:13:56.735219 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-02-02 02:13:56.735223 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-02-02 02:13:56.735226 | orchestrator | + fingerprint = (known after apply) 2026-02-02 02:13:56.735230 | orchestrator | + id = (known after apply) 2026-02-02 02:13:56.735234 | orchestrator | + name = "testbed" 2026-02-02 02:13:56.735238 | orchestrator | + private_key = 
(sensitive value) 2026-02-02 02:13:56.735242 | orchestrator | + public_key = (known after apply) 2026-02-02 02:13:56.735245 | orchestrator | + region = (known after apply) 2026-02-02 02:13:56.735249 | orchestrator | + user_id = (known after apply) 2026-02-02 02:13:56.735253 | orchestrator | } 2026-02-02 02:13:56.735257 | orchestrator | 2026-02-02 02:13:56.735260 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-02-02 02:13:56.735264 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-02 02:13:56.735271 | orchestrator | + device = (known after apply) 2026-02-02 02:13:56.735275 | orchestrator | + id = (known after apply) 2026-02-02 02:13:56.735279 | orchestrator | + instance_id = (known after apply) 2026-02-02 02:13:56.735283 | orchestrator | + region = (known after apply) 2026-02-02 02:13:56.735287 | orchestrator | + volume_id = (known after apply) 2026-02-02 02:13:56.735290 | orchestrator | } 2026-02-02 02:13:56.735294 | orchestrator | 2026-02-02 02:13:56.735298 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-02-02 02:13:56.735302 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-02 02:13:56.735306 | orchestrator | + device = (known after apply) 2026-02-02 02:13:56.735309 | orchestrator | + id = (known after apply) 2026-02-02 02:13:56.735313 | orchestrator | + instance_id = (known after apply) 2026-02-02 02:13:56.735317 | orchestrator | + region = (known after apply) 2026-02-02 02:13:56.735321 | orchestrator | + volume_id = (known after apply) 2026-02-02 02:13:56.735324 | orchestrator | } 2026-02-02 02:13:56.735328 | orchestrator | 2026-02-02 02:13:56.735332 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-02-02 02:13:56.735336 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
orchestrator | {
orchestrator |       + device      = (known after apply)
orchestrator |       + id          = (known after apply)
orchestrator |       + instance_id = (known after apply)
orchestrator |       + region      = (known after apply)
orchestrator |       + volume_id   = (known after apply)
orchestrator |     }
orchestrator |
orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
orchestrator |       + device      = (known after apply)
orchestrator |       + id          = (known after apply)
orchestrator |       + instance_id = (known after apply)
orchestrator |       + region      = (known after apply)
orchestrator |       + volume_id   = (known after apply)
orchestrator |     }
orchestrator |
orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
orchestrator |       + device      = (known after apply)
orchestrator |       + id          = (known after apply)
orchestrator |       + instance_id = (known after apply)
orchestrator |       + region      = (known after apply)
orchestrator |       + volume_id   = (known after apply)
orchestrator |     }
orchestrator |
orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
orchestrator |       + device      = (known after apply)
orchestrator |       + id          = (known after apply)
orchestrator |       + instance_id = (known after apply)
orchestrator |       + region      = (known after apply)
orchestrator |       + volume_id   = (known after apply)
orchestrator |     }
orchestrator |
orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
orchestrator |       + device      = (known after apply)
orchestrator |       + id          = (known after apply)
orchestrator |       + instance_id = (known after apply)
orchestrator |       + region      = (known after apply)
orchestrator |       + volume_id   = (known after apply)
orchestrator |     }
orchestrator |
orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
orchestrator |       + device      = (known after apply)
orchestrator |       + id          = (known after apply)
orchestrator |       + instance_id = (known after apply)
orchestrator |       + region      = (known after apply)
orchestrator |       + volume_id   = (known after apply)
orchestrator |     }
orchestrator |
orchestrator |   # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
orchestrator |   + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
orchestrator |       + device      = (known after apply)
orchestrator |       + id          = (known after apply)
orchestrator |       + instance_id = (known after apply)
orchestrator |       + region      = (known after apply)
orchestrator |       + volume_id   = (known after apply)
orchestrator |     }
orchestrator |
orchestrator |   # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
orchestrator |   + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
orchestrator |       + fixed_ip    = (known after apply)
orchestrator |       + floating_ip = (known after apply)
orchestrator |       + id          = (known after apply)
orchestrator |       + port_id     = (known after apply)
orchestrator |       + region      = (known after apply)
orchestrator |     }
orchestrator |
orchestrator |   # openstack_networking_floatingip_v2.manager_floating_ip will be created
orchestrator |   + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
orchestrator |       + address    = (known after apply)
orchestrator |       + all_tags   = (known after apply)
orchestrator |       + dns_domain = (known after apply)
orchestrator |       + dns_name   = (known after apply)
orchestrator |       + fixed_ip   = (known after apply)
orchestrator |       + id         = (known after apply)
orchestrator |       + pool       = "public"
orchestrator |       + port_id    = (known after apply)
orchestrator |       + region     = (known after apply)
orchestrator |       + subnet_id  = (known after apply)
orchestrator |       + tenant_id  = (known after apply)
orchestrator |     }
orchestrator |
orchestrator |   # openstack_networking_network_v2.net_management will be created
orchestrator |   + resource "openstack_networking_network_v2" "net_management" {
orchestrator |       + admin_state_up          = (known after apply)
orchestrator |       + all_tags                = (known after apply)
orchestrator |       + availability_zone_hints = [
orchestrator |           + "nova",
orchestrator |         ]
orchestrator |       + dns_domain              = (known after apply)
orchestrator |       + external                = (known after apply)
orchestrator |       + id                      = (known after apply)
orchestrator |       + mtu                     = (known after apply)
orchestrator |       + name                    = "net-testbed-management"
orchestrator |       + port_security_enabled   = (known after apply)
orchestrator |       + qos_policy_id           = (known after apply)
orchestrator |       + region                  = (known after apply)
orchestrator |       + shared                  = (known after apply)
orchestrator |       + tenant_id               = (known after apply)
orchestrator |       + transparent_vlan        = (known after apply)
orchestrator |
orchestrator |       + segments (known after apply)
orchestrator |     }
orchestrator |
orchestrator |   # openstack_networking_port_v2.manager_port_management will be created
orchestrator |   + resource "openstack_networking_port_v2" "manager_port_management" {
orchestrator |       + admin_state_up         = (known after apply)
orchestrator |       + all_fixed_ips          = (known after apply)
orchestrator |       + all_security_group_ids = (known after apply)
orchestrator |       + all_tags               = (known after apply)
orchestrator |       + device_id              = (known after apply)
orchestrator |       + device_owner           = (known after apply)
orchestrator |       + dns_assignment         = (known after apply)
orchestrator |       + dns_name               = (known after apply)
orchestrator |       + id                     = (known after apply)
orchestrator |       + mac_address            = (known after apply)
orchestrator |       + network_id             = (known after apply)
orchestrator |       + port_security_enabled  = (known after apply)
orchestrator |       + qos_policy_id          = (known after apply)
orchestrator |       + region                 = (known after apply)
orchestrator |       + security_group_ids     = (known after apply)
orchestrator |       + tenant_id              = (known after apply)
orchestrator |
orchestrator |       + allowed_address_pairs {
orchestrator |           + ip_address = "192.168.16.8/32"
orchestrator |         }
orchestrator |
orchestrator |       + binding (known after apply)
orchestrator |
orchestrator |       + fixed_ip {
orchestrator |           + ip_address = "192.168.16.5"
orchestrator |           + subnet_id  = (known after apply)
orchestrator |         }
orchestrator |     }
orchestrator |
orchestrator |   # openstack_networking_port_v2.node_port_management[0] will be created
orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
orchestrator |       + admin_state_up         = (known after apply)
orchestrator |       + all_fixed_ips          = (known after apply)
orchestrator |       + all_security_group_ids = (known after apply)
orchestrator |       + all_tags               = (known after apply)
orchestrator |       + device_id              = (known after apply)
orchestrator |       + device_owner           = (known after apply)
orchestrator |       + dns_assignment         = (known after apply)
orchestrator |       + dns_name               = (known after apply)
orchestrator |       + id                     = (known after apply)
orchestrator |       + mac_address            = (known after apply)
orchestrator |       + network_id             = (known after apply)
orchestrator |       + port_security_enabled  = (known after apply)
orchestrator |       + qos_policy_id          = (known after apply)
orchestrator |       + region                 = (known after apply)
orchestrator |       + security_group_ids     = (known after apply)
orchestrator |       + tenant_id              = (known after apply)
orchestrator |
orchestrator |       + allowed_address_pairs {
orchestrator |           + ip_address = "192.168.16.254/32"
orchestrator |         }
orchestrator |       + allowed_address_pairs {
orchestrator |           + ip_address = "192.168.16.8/32"
orchestrator |         }
orchestrator |       + allowed_address_pairs {
orchestrator |           + ip_address = "192.168.16.9/32"
orchestrator |         }
orchestrator |
orchestrator |       + binding (known after apply)
orchestrator |
orchestrator |       + fixed_ip {
orchestrator |           + ip_address = "192.168.16.10"
orchestrator |           + subnet_id  = (known after apply)
orchestrator |         }
orchestrator |     }
orchestrator |
orchestrator |   # openstack_networking_port_v2.node_port_management[1] will be created
orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
orchestrator |       + admin_state_up         = (known after apply)
orchestrator |       + all_fixed_ips          = (known after apply)
orchestrator |       + all_security_group_ids = (known after apply)
orchestrator |       + all_tags               = (known after apply)
orchestrator |       + device_id              = (known after apply)
orchestrator |       + device_owner           = (known after apply)
orchestrator |       + dns_assignment         = (known after apply)
orchestrator |       + dns_name               = (known after apply)
orchestrator |       + id                     = (known after apply)
orchestrator |       + mac_address            = (known after apply)
orchestrator |       + network_id             = (known after apply)
orchestrator |       + port_security_enabled  = (known after apply)
orchestrator |       + qos_policy_id          = (known after apply)
orchestrator |       + region                 = (known after apply)
orchestrator |       + security_group_ids     = (known after apply)
orchestrator |       + tenant_id              = (known after apply)
orchestrator |
orchestrator |       + allowed_address_pairs {
orchestrator |           + ip_address = "192.168.16.254/32"
orchestrator |         }
orchestrator |       + allowed_address_pairs {
orchestrator |           + ip_address = "192.168.16.8/32"
orchestrator |         }
orchestrator |       + allowed_address_pairs {
orchestrator |           + ip_address = "192.168.16.9/32"
orchestrator |         }
orchestrator |
orchestrator |       + binding (known after apply)
orchestrator |
orchestrator |       + fixed_ip {
orchestrator |           + ip_address = "192.168.16.11"
orchestrator |           + subnet_id  = (known after apply)
orchestrator |         }
orchestrator |     }
orchestrator |
orchestrator |   # openstack_networking_port_v2.node_port_management[2] will be created
orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
orchestrator |       + admin_state_up         = (known after apply)
orchestrator |       + all_fixed_ips          = (known after apply)
orchestrator |       + all_security_group_ids = (known after apply)
orchestrator |       + all_tags               = (known after apply)
orchestrator |       + device_id              = (known after apply)
orchestrator |       + device_owner           = (known after apply)
orchestrator |       + dns_assignment         = (known after apply)
orchestrator |       + dns_name               = (known after apply)
orchestrator |       + id                     = (known after apply)
orchestrator |       + mac_address            = (known after apply)
orchestrator |       + network_id             = (known after apply)
orchestrator |       + port_security_enabled  = (known after apply)
orchestrator |       + qos_policy_id          = (known after apply)
orchestrator |       + region                 = (known after apply)
orchestrator |       + security_group_ids     = (known after apply)
orchestrator |       + tenant_id              = (known after apply)
orchestrator |
orchestrator |       + allowed_address_pairs {
orchestrator |           + ip_address = "192.168.16.254/32"
orchestrator |         }
orchestrator |       + allowed_address_pairs {
orchestrator |           + ip_address = "192.168.16.8/32"
orchestrator |         }
orchestrator |       + allowed_address_pairs {
orchestrator |           + ip_address = "192.168.16.9/32"
orchestrator |         }
orchestrator |
orchestrator |       + binding (known after apply)
orchestrator |
orchestrator |       + fixed_ip {
orchestrator |           + ip_address = "192.168.16.12"
orchestrator |           + subnet_id  = (known after apply)
orchestrator |         }
orchestrator |     }
orchestrator |
orchestrator |   # openstack_networking_port_v2.node_port_management[3] will be created
orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
orchestrator |       + admin_state_up         = (known after apply)
orchestrator |       + all_fixed_ips          = (known after apply)
orchestrator |       + all_security_group_ids = (known after apply)
orchestrator |       + all_tags               = (known after apply)
orchestrator |       + device_id              = (known after apply)
orchestrator |       + device_owner           = (known after apply)
orchestrator |       + dns_assignment         = (known after apply)
orchestrator |       + dns_name               = (known after apply)
orchestrator |       + id                     = (known after apply)
orchestrator |       + mac_address            = (known after apply)
orchestrator |       + network_id             = (known after apply)
orchestrator |       + port_security_enabled  = (known after apply)
orchestrator |       + qos_policy_id          = (known after apply)
orchestrator |       + region                 = (known after apply)
orchestrator |       + security_group_ids     = (known after apply)
orchestrator |       + tenant_id              = (known after apply)
orchestrator |
orchestrator |       + allowed_address_pairs {
orchestrator |           + ip_address = "192.168.16.254/32"
orchestrator |         }
orchestrator |       + allowed_address_pairs {
orchestrator |           + ip_address = "192.168.16.8/32"
orchestrator |         }
orchestrator |       + allowed_address_pairs {
orchestrator |           + ip_address = "192.168.16.9/32"
orchestrator |         }
orchestrator |
orchestrator |       + binding (known after apply)
orchestrator |
orchestrator |       + fixed_ip {
orchestrator |           + ip_address = "192.168.16.13"
orchestrator |           + subnet_id  = (known after apply)
orchestrator |         }
orchestrator |     }
orchestrator |
orchestrator |   # openstack_networking_port_v2.node_port_management[4] will be created
orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
orchestrator |       + admin_state_up         = (known after apply)
orchestrator |       + all_fixed_ips          = (known after apply)
orchestrator |       + all_security_group_ids = (known after apply)
orchestrator |       + all_tags               = (known after apply)
orchestrator |       + device_id              = (known after apply)
orchestrator |       + device_owner           = (known after apply)
orchestrator |       + dns_assignment         = (known after apply)
orchestrator |       + dns_name               = (known after apply)
orchestrator |       + id                     = (known after apply)
orchestrator |       + mac_address            = (known after apply)
orchestrator |       + network_id             = (known after apply)
orchestrator |       + port_security_enabled  = (known after apply)
orchestrator |       + qos_policy_id          = (known after apply)
orchestrator |       + region                 = (known after apply)
orchestrator |       + security_group_ids     = (known after apply)
orchestrator |       + tenant_id              = (known after apply)
orchestrator |
orchestrator |       + allowed_address_pairs {
orchestrator |           + ip_address = "192.168.16.254/32"
orchestrator |         }
orchestrator |       + allowed_address_pairs {
orchestrator |           + ip_address = "192.168.16.8/32"
orchestrator |         }
orchestrator |       + allowed_address_pairs {
orchestrator |           + ip_address = "192.168.16.9/32"
orchestrator |         }
orchestrator |
orchestrator |       + binding (known after apply)
orchestrator |
orchestrator |       + fixed_ip {
orchestrator |           + ip_address = "192.168.16.14"
orchestrator |           + subnet_id  = (known after apply)
orchestrator |         }
orchestrator |     }
orchestrator |
orchestrator |   # openstack_networking_port_v2.node_port_management[5] will be created
orchestrator |   + resource "openstack_networking_port_v2" "node_port_management" {
orchestrator |       + admin_state_up         = (known after apply)
orchestrator |       + all_fixed_ips          = (known after apply)
orchestrator |       + all_security_group_ids = (known after apply)
orchestrator |       + all_tags               = (known after apply)
orchestrator |       + device_id              = (known after apply)
orchestrator |       + device_owner           = (known after apply)
orchestrator |       + dns_assignment         = (known after apply)
orchestrator |       + dns_name               = (known after apply)
orchestrator |       + id                     = (known after apply)
orchestrator |       + mac_address            = (known after apply)
orchestrator |       + network_id             = (known after apply)
orchestrator |       + port_security_enabled  = (known after apply)
orchestrator |       + qos_policy_id          = (known after apply)
orchestrator |       + region                 = (known after apply)
orchestrator |       + security_group_ids     = (known after apply)
orchestrator |       + tenant_id              = (known after apply)
orchestrator |
orchestrator |       + allowed_address_pairs {
orchestrator |           + ip_address = "192.168.16.254/32"
orchestrator |         }
orchestrator |       + allowed_address_pairs {
orchestrator |           + ip_address = "192.168.16.8/32"
orchestrator |         }
orchestrator |       + allowed_address_pairs {
orchestrator |           + ip_address = "192.168.16.9/32"
orchestrator |         }
orchestrator |
orchestrator |       + binding (known after apply)
orchestrator |
orchestrator |       + fixed_ip {
orchestrator |           + ip_address = "192.168.16.15"
orchestrator |           + subnet_id  = (known after apply)
orchestrator |         }
orchestrator |     }
orchestrator |
orchestrator |   # openstack_networking_router_interface_v2.router_interface will be created
orchestrator |   + resource "openstack_networking_router_interface_v2" "router_interface" {
orchestrator |       + force_destroy = false
orchestrator |       + id            = (known after apply)
orchestrator |       + port_id       = (known after apply)
orchestrator |       + region        = (known after apply)
orchestrator |       + router_id     = (known after apply)
orchestrator |       + subnet_id     = (known after apply)
orchestrator |     }
orchestrator |
orchestrator |   # openstack_networking_router_v2.router will be created
orchestrator |   + resource "openstack_networking_router_v2" "router" {
orchestrator |       + admin_state_up          = (known after apply)
orchestrator |       + all_tags                = (known after apply)
orchestrator |       + availability_zone_hints = [
orchestrator |           + "nova",
orchestrator |         ]
orchestrator |       + distributed             = (known after apply)
orchestrator |       + enable_snat             = (known after apply)
orchestrator |       + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
orchestrator |       + external_qos_policy_id  = (known after apply)
orchestrator |       + id                      = (known after apply)
orchestrator |       + name                    = "testbed"
orchestrator |       + region                  = (known after apply)
orchestrator |       + tenant_id               = (known after apply)
orchestrator |
orchestrator |       + external_fixed_ip (known after apply)
orchestrator |     }
orchestrator |
orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
orchestrator |       + description             = "ssh"
orchestrator |       + direction               = "ingress"
orchestrator |       + ethertype               = "IPv4"
orchestrator |       + id                      = (known after apply)
orchestrator |       + port_range_max          = 22
orchestrator |       + port_range_min          = 22
orchestrator |       + protocol                = "tcp"
orchestrator |       + region                  = (known after apply)
orchestrator |       + remote_address_group_id = (known after apply)
orchestrator |       + remote_group_id         = (known after apply)
orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
orchestrator |       + security_group_id       = (known after apply)
orchestrator |       + tenant_id               = (known after apply)
orchestrator |     }
orchestrator |
orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
orchestrator |       + description             = "wireguard"
orchestrator |       + direction               = "ingress"
orchestrator |       + ethertype               = "IPv4"
orchestrator |       + id                      = (known after apply)
orchestrator |       + port_range_max          = 51820
orchestrator |       + port_range_min          = 51820
orchestrator |       + protocol                = "udp"
orchestrator |       + region                  = (known after apply)
orchestrator |       + remote_address_group_id = (known after apply)
orchestrator |       + remote_group_id         = (known after apply)
orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
orchestrator |       + security_group_id       = (known after apply)
orchestrator |       + tenant_id               = (known after apply)
orchestrator |     }
orchestrator |
orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
orchestrator |       + direction               = "ingress"
orchestrator |       + ethertype               = "IPv4"
orchestrator |       + id                      = (known after apply)
orchestrator |       + protocol                = "tcp"
orchestrator |       + region                  = (known after apply)
orchestrator |       + remote_address_group_id = (known after apply)
orchestrator |       + remote_group_id         = (known after apply)
orchestrator |       + remote_ip_prefix        = "192.168.16.0/20"
orchestrator |       + security_group_id       = (known after apply)
orchestrator |       + tenant_id               = (known after apply)
orchestrator |     }
orchestrator |
orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
orchestrator |       + direction               = "ingress"
orchestrator |       + ethertype               = "IPv4"
orchestrator |       + id                      = (known after apply)
orchestrator |       + protocol                = "udp"
orchestrator |       + region                  = (known after apply)
orchestrator |       + remote_address_group_id = (known after apply)
orchestrator |       + remote_group_id         = (known after apply)
orchestrator |       + remote_ip_prefix        = "192.168.16.0/20"
orchestrator |       + security_group_id       = (known after apply)
orchestrator |       + tenant_id               = (known after apply)
orchestrator |     }
orchestrator |
orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
orchestrator |       + direction               = "ingress"
orchestrator |       + ethertype               = "IPv4"
orchestrator |       + id                      = (known after apply)
orchestrator |       + protocol                = "icmp"
orchestrator |       + region                  = (known after apply)
orchestrator |       + remote_address_group_id = (known after apply)
orchestrator |       + remote_group_id         = (known after apply)
orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
orchestrator |       + security_group_id       = (known after apply)
orchestrator |       + tenant_id               = (known after apply)
orchestrator |     }
orchestrator |
orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
orchestrator |       + direction               = "ingress"
orchestrator |       + ethertype               = "IPv4"
orchestrator |       + id                      = (known after apply)
orchestrator |       + protocol                = "tcp"
orchestrator |       + region                  = (known after apply)
orchestrator |       + remote_address_group_id = (known after apply)
orchestrator |       + remote_group_id         = (known after apply)
orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
orchestrator |       + security_group_id       = (known after apply)
orchestrator |       + tenant_id               = (known after apply)
orchestrator |     }
orchestrator |
orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
orchestrator |       + direction               = "ingress"
orchestrator |       + ethertype               = "IPv4"
orchestrator |       + id                      = (known after apply)
orchestrator |       + protocol                = "udp"
orchestrator |       + region                  = (known after apply)
orchestrator |       + remote_address_group_id = (known after apply)
orchestrator |       + remote_group_id         = (known after apply)
orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
orchestrator |       + security_group_id       = (known after apply)
orchestrator |       + tenant_id               = (known after apply)
orchestrator |     }
orchestrator |
orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
orchestrator |       + direction               = "ingress"
orchestrator |       + ethertype               = "IPv4"
orchestrator |       + id                      = (known after apply)
orchestrator |       + protocol                = "icmp"
orchestrator |       + region                  = (known after apply)
orchestrator |       + remote_address_group_id = (known after apply)
orchestrator |       + remote_group_id         = (known after apply)
orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
orchestrator |       + security_group_id       = (known after apply)
orchestrator |       + tenant_id               = (known after apply)
orchestrator |     }
orchestrator |
orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
orchestrator |       + description             = "vrrp"
orchestrator |       + direction               = "ingress"
orchestrator |       + ethertype               = "IPv4"
orchestrator |       + id                      = (known after apply)
orchestrator |       + protocol                = "112"
orchestrator |       + region                  = (known after apply)
orchestrator |       + remote_address_group_id = (known after apply)
orchestrator |       + remote_group_id         = (known after apply)
orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
orchestrator |       + security_group_id       = (known after apply)
orchestrator |       + tenant_id               = (known after apply)
orchestrator |     }
orchestrator |
orchestrator |   # openstack_networking_secgroup_v2.security_group_management will be created
orchestrator |   + resource "openstack_networking_secgroup_v2" "security_group_management" {
orchestrator |       + all_tags    = (known after apply)
orchestrator |       + description = "management security group"
orchestrator |       + id          = (known after apply)
orchestrator |       + name        = "testbed-management"
orchestrator |       + region      = (known after apply)
orchestrator |       + stateful    = (known after apply)
orchestrator |       + tenant_id   = (known after apply)
orchestrator |     }
orchestrator |
orchestrator |   # openstack_networking_secgroup_v2.security_group_node will be created
orchestrator |   + resource "openstack_networking_secgroup_v2" "security_group_node" {
orchestrator |       + all_tags    = (known after apply)
orchestrator |       + description = "node security group"
orchestrator |       + id          = (known after apply)
orchestrator |       + name        = "testbed-node"
orchestrator |       + region      = (known after apply)
orchestrator |       + stateful    = (known after apply)
orchestrator |       + tenant_id   = (known after apply)
orchestrator |     }
orchestrator |
orchestrator |   # openstack_networking_subnet_v2.subnet_management will be created
orchestrator |   + resource "openstack_networking_subnet_v2" "subnet_management" {
orchestrator |       + all_tags          = (known after apply)
orchestrator |       + cidr              = "192.168.16.0/20"
orchestrator |       + dns_nameservers   = [
orchestrator |           + "8.8.8.8",
orchestrator |           + "9.9.9.9",
orchestrator |         ]
orchestrator |       + enable_dhcp       = true
orchestrator |       + gateway_ip        = (known after apply)
orchestrator |       + id                = (known after apply)
orchestrator |       + ip_version        = 4
orchestrator |       + ipv6_address_mode = (known after apply)
orchestrator |       + ipv6_ra_mode      = (known after apply)
orchestrator |       + name              = "subnet-testbed-management"
2026-02-02 02:13:56.739230 | orchestrator | + network_id = (known after apply) 2026-02-02 02:13:56.739234 | orchestrator | + no_gateway = false 2026-02-02 02:13:56.739238 | orchestrator | + region = (known after apply) 2026-02-02 02:13:56.739242 | orchestrator | + service_types = (known after apply) 2026-02-02 02:13:56.739261 | orchestrator | + tenant_id = (known after apply) 2026-02-02 02:13:56.739265 | orchestrator | 2026-02-02 02:13:56.739269 | orchestrator | + allocation_pool { 2026-02-02 02:13:56.739273 | orchestrator | + end = "192.168.31.250" 2026-02-02 02:13:56.739276 | orchestrator | + start = "192.168.31.200" 2026-02-02 02:13:56.739280 | orchestrator | } 2026-02-02 02:13:56.739284 | orchestrator | } 2026-02-02 02:13:56.739288 | orchestrator | 2026-02-02 02:13:56.739291 | orchestrator | # terraform_data.image will be created 2026-02-02 02:13:56.739295 | orchestrator | + resource "terraform_data" "image" { 2026-02-02 02:13:56.739299 | orchestrator | + id = (known after apply) 2026-02-02 02:13:56.739302 | orchestrator | + input = "Ubuntu 24.04" 2026-02-02 02:13:56.739306 | orchestrator | + output = (known after apply) 2026-02-02 02:13:56.739310 | orchestrator | } 2026-02-02 02:13:56.739314 | orchestrator | 2026-02-02 02:13:56.739317 | orchestrator | # terraform_data.image_node will be created 2026-02-02 02:13:56.739321 | orchestrator | + resource "terraform_data" "image_node" { 2026-02-02 02:13:56.739325 | orchestrator | + id = (known after apply) 2026-02-02 02:13:56.739329 | orchestrator | + input = "Ubuntu 24.04" 2026-02-02 02:13:56.739332 | orchestrator | + output = (known after apply) 2026-02-02 02:13:56.739336 | orchestrator | } 2026-02-02 02:13:56.739340 | orchestrator | 2026-02-02 02:13:56.739343 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-02-02 02:13:56.739347 | orchestrator |
2026-02-02 02:13:56.739351 | orchestrator | Changes to Outputs:
2026-02-02 02:13:56.739355 | orchestrator | + manager_address = (sensitive value)
2026-02-02 02:13:56.739359 | orchestrator | + private_key = (sensitive value)
2026-02-02 02:13:56.975598 | orchestrator | terraform_data.image: Creating...
2026-02-02 02:13:56.976131 | orchestrator | terraform_data.image: Creation complete after 0s [id=23aa915b-9c75-d0ea-98e2-bb19000404e0]
2026-02-02 02:13:56.977104 | orchestrator | terraform_data.image_node: Creating...
2026-02-02 02:13:56.981397 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=0d0dc50d-4eb8-999c-30d9-9f604098ad13]
2026-02-02 02:13:56.989597 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-02-02 02:13:56.991917 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-02-02 02:13:56.999000 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-02-02 02:13:57.003946 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-02-02 02:13:57.006822 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-02-02 02:13:57.016904 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-02-02 02:13:57.018944 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-02-02 02:13:57.019862 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-02-02 02:13:57.021283 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-02-02 02:13:57.024961 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-02-02 02:13:57.487631 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-02 02:13:57.493589 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-02-02 02:13:57.507896 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-02-02 02:13:57.519303 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-02-02 02:13:57.780927 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-02 02:13:57.789642 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-02-02 02:13:57.987055 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=87182dfc-1d8e-4f95-a771-e46761ba77e9]
2026-02-02 02:13:57.997477 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-02-02 02:14:00.631636 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=c15f901f-7629-41e5-bfd5-e721d3f198c6]
2026-02-02 02:14:00.648432 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=5578c4aa-4507-4a80-9665-78072b9f11f4]
2026-02-02 02:14:00.652338 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=bc39994b-92aa-40f2-807e-6457f6f8ea40]
2026-02-02 02:14:00.654845 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-02-02 02:14:00.656774 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-02-02 02:14:00.661359 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=076229ff-17a9-47be-973d-14b64a36a012]
2026-02-02 02:14:00.662669 | orchestrator | local_file.id_rsa_pub: Creating...
2026-02-02 02:14:00.666060 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-02-02 02:14:00.668847 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=20c3425f17da0fcbaccfd02cafd50b31ba399598]
2026-02-02 02:14:00.669327 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=f3337773bb7293b8895b9e4f74cb569c8797d7fd]
2026-02-02 02:14:00.676922 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=2d3e981f-8554-4288-941a-275f46913f28]
2026-02-02 02:14:00.677789 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-02-02 02:14:00.681323 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-02-02 02:14:00.682914 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-02-02 02:14:00.696547 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=e969e129-18ea-460f-85bc-8dfb49c82359]
2026-02-02 02:14:00.703726 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-02-02 02:14:00.736271 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=9dac4244-a4bc-44f9-ad81-53a595dd15e5]
2026-02-02 02:14:00.745237 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-02-02 02:14:00.750983 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=1f26c814-af40-4046-ac8d-013998d956cc]
2026-02-02 02:14:00.987904 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=10248bd5-0286-487e-81b0-791c797cb21b]
2026-02-02 02:14:01.348052 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=a2a7e3dd-293e-4828-91d3-e84de9ff6d73]
2026-02-02 02:14:02.072067 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=ad2229c1-bdc0-4413-9fd1-b374dce5b72b]
2026-02-02 02:14:02.081951 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-02-02 02:14:04.048857 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=2944b273-4436-4bbb-8e69-1106f32efe58]
2026-02-02 02:14:04.063424 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=0dc97797-18b0-45ea-a436-4e6412a95502]
2026-02-02 02:14:04.091717 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=6d8209b1-65e9-4122-ac58-4b8b748af111]
2026-02-02 02:14:04.123832 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=91f9e36e-a0b2-48b8-b319-344a0ffd6bbe]
2026-02-02 02:14:04.143978 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=212ed843-6edd-4565-8465-188b3268426b]
2026-02-02 02:14:04.564299 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=a2343887-7bc1-4466-877e-c2a88f331c7f]
2026-02-02 02:14:04.645373 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=565caa81-dbe1-4003-8de2-aa57358fad56]
2026-02-02 02:14:04.652369 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-02-02 02:14:04.652842 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-02-02 02:14:04.653669 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-02-02 02:14:04.830372 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=9c8830c3-8fda-4e1a-baca-15df324566ab]
2026-02-02 02:14:04.850880 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-02-02 02:14:04.851502 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-02-02 02:14:04.851844 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-02-02 02:14:04.855781 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-02-02 02:14:04.858708 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-02-02 02:14:04.858763 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-02-02 02:14:04.859890 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-02-02 02:14:04.860579 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-02-02 02:14:04.872167 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=386b7eee-b376-414e-b2f8-14a4e28dabad]
2026-02-02 02:14:04.881877 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-02-02 02:14:05.034551 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=f65811af-6d01-4516-8b67-4eae25a5bad1]
2026-02-02 02:14:05.048094 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-02-02 02:14:05.241752 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=d6c85a0e-d861-4bdf-a971-adfcf03d505f]
2026-02-02 02:14:05.249071 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-02-02 02:14:05.427866 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=5486aab9-edbd-4547-b24c-d2603e1380ae]
2026-02-02 02:14:05.432694 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-02-02 02:14:05.445786 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 0s [id=3637a39c-dfa7-4689-8197-25e5882a33c9]
2026-02-02 02:14:05.448962 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 0s [id=bad893b5-5345-49a4-918f-067ceb5436a2]
2026-02-02 02:14:05.451023 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-02-02 02:14:05.453600 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-02-02 02:14:05.579136 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=f9b49fe4-bdf4-4fcc-9b77-0cd94cce9249]
2026-02-02 02:14:05.586592 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-02-02 02:14:05.702772 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=c748f04f-28d6-4222-9340-672191b4eeb5]
2026-02-02 02:14:05.709802 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-02-02 02:14:05.719761 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=e4b62ab2-ef3a-47ba-91c2-55cc951da39f]
2026-02-02 02:14:05.749729 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=8de788e6-91b6-4d30-9f1c-fce2587181c0]
2026-02-02 02:14:05.824601 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=36fcdf14-c1e0-41ab-9856-923687925a26]
2026-02-02 02:14:05.842005 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=8ad960a9-0988-4001-99b7-f8058562e4cf]
2026-02-02 02:14:06.231716 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=90e1df68-daa1-416f-a52e-7bbd5424855d]
2026-02-02 02:14:06.367214 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=689f32f7-f783-4bfb-8754-d01642081cd0]
2026-02-02 02:14:06.551200 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 2s [id=7dade044-5ad9-45a6-a3cd-06103569d02c]
2026-02-02 02:14:06.702409 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 2s [id=5aeef79d-7fd6-4157-bfbb-0f30d76e646a]
2026-02-02 02:14:06.819926 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 2s [id=58f001cf-417c-4d9d-b81c-f89387629f03]
2026-02-02 02:14:06.840576 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-02-02 02:14:06.854151 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-02-02 02:14:06.855168 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-02-02 02:14:06.861967 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-02-02 02:14:06.862874 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-02-02 02:14:06.865761 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-02-02 02:14:06.876588 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-02-02 02:14:06.878921 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=b0e9478c-06dc-429c-8b2b-32e5a41d5205]
2026-02-02 02:14:08.248518 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=4d43f5cb-10ab-4e9f-9e32-e35e4ee2f8fc]
2026-02-02 02:14:08.259115 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-02-02 02:14:08.265998 | orchestrator | local_file.inventory: Creating...
2026-02-02 02:14:08.267883 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-02-02 02:14:08.272254 | orchestrator | local_file.inventory: Creation complete after 0s [id=55b9b6c544e95976d3881be61d84de1045ab8a32]
2026-02-02 02:14:08.272877 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=cce2190d94b305ac491d340b89076bc00ce11333]
2026-02-02 02:14:09.642639 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=4d43f5cb-10ab-4e9f-9e32-e35e4ee2f8fc]
2026-02-02 02:14:16.855886 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-02-02 02:14:16.859632 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-02-02 02:14:16.873018 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-02-02 02:14:16.873114 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-02-02 02:14:16.879592 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-02-02 02:14:16.879715 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-02-02 02:14:26.856162 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-02-02 02:14:26.860436 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-02-02 02:14:26.873809 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-02-02 02:14:26.873961 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-02-02 02:14:26.880689 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-02-02 02:14:26.880810 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-02-02 02:14:27.239081 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 20s [id=bdaa8dc9-14a1-446c-acde-aa026b2df496]
2026-02-02 02:14:27.413205 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=3755fea1-c95b-4707-a52e-c8fadee0ae24]
2026-02-02 02:14:27.435108 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 20s [id=853fb82d-3fda-46a1-a945-ba70a426f277]
2026-02-02 02:14:27.440554 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=66583c21-a87a-4d5b-b85c-d41078f62d0c]
2026-02-02 02:14:36.882666 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-02-02 02:14:36.882782 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-02-02 02:14:37.578846 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=9e2d2543-8dfc-43e6-9710-1d1d3a0cd07e]
2026-02-02 02:14:37.685424 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=58b53976-66a9-4160-91bf-aac1c23a06d4]
2026-02-02 02:14:37.699658 | orchestrator | null_resource.node_semaphore: Creating...
2026-02-02 02:14:37.708066 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=7953885801370931063]
2026-02-02 02:14:37.711622 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-02-02 02:14:37.711778 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-02-02 02:14:37.711950 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-02-02 02:14:37.720366 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-02-02 02:14:37.722314 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-02-02 02:14:37.723583 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-02-02 02:14:37.739653 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-02-02 02:14:37.740158 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-02-02 02:14:37.746450 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-02-02 02:14:37.759006 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-02-02 02:14:41.090281 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=66583c21-a87a-4d5b-b85c-d41078f62d0c/e969e129-18ea-460f-85bc-8dfb49c82359]
2026-02-02 02:14:41.099771 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=3755fea1-c95b-4707-a52e-c8fadee0ae24/c15f901f-7629-41e5-bfd5-e721d3f198c6]
2026-02-02 02:14:41.118695 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=9e2d2543-8dfc-43e6-9710-1d1d3a0cd07e/076229ff-17a9-47be-973d-14b64a36a012]
2026-02-02 02:14:41.133548 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=66583c21-a87a-4d5b-b85c-d41078f62d0c/10248bd5-0286-487e-81b0-791c797cb21b]
2026-02-02 02:14:41.148182 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=3755fea1-c95b-4707-a52e-c8fadee0ae24/1f26c814-af40-4046-ac8d-013998d956cc]
2026-02-02 02:14:41.171769 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=9e2d2543-8dfc-43e6-9710-1d1d3a0cd07e/2d3e981f-8554-4288-941a-275f46913f28]
2026-02-02 02:14:47.241314 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 9s [id=66583c21-a87a-4d5b-b85c-d41078f62d0c/bc39994b-92aa-40f2-807e-6457f6f8ea40]
2026-02-02 02:14:47.263869 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 9s [id=9e2d2543-8dfc-43e6-9710-1d1d3a0cd07e/9dac4244-a4bc-44f9-ad81-53a595dd15e5]
2026-02-02 02:14:47.314886 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 9s [id=3755fea1-c95b-4707-a52e-c8fadee0ae24/5578c4aa-4507-4a80-9665-78072b9f11f4]
2026-02-02 02:14:47.744783 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-02-02 02:14:57.745303 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-02-02 02:14:58.045204 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=f4f6bbb7-e4bd-4ccc-a1d5-0b11aad06ed6]
2026-02-02 02:14:58.060914 | orchestrator |
2026-02-02 02:14:58.060985 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-02-02 02:14:58.060993 | orchestrator |
2026-02-02 02:14:58.060998 | orchestrator | Outputs:
2026-02-02 02:14:58.061003 | orchestrator |
2026-02-02 02:14:58.061014 | orchestrator | manager_address =
2026-02-02 02:14:58.061020 | orchestrator | private_key =
2026-02-02 02:14:58.553800 | orchestrator | ok: Runtime: 0:01:07.151204
2026-02-02 02:14:58.584050 |
2026-02-02 02:14:58.584169 | TASK [Fetch manager address]
2026-02-02 02:14:59.034437 | orchestrator | ok
2026-02-02 02:14:59.047451 |
2026-02-02 02:14:59.047680 | TASK [Set manager_host address]
2026-02-02 02:14:59.121481 | orchestrator | ok
2026-02-02 02:14:59.128895 |
2026-02-02 02:14:59.129023 | LOOP [Update ansible collections]
2026-02-02 02:15:03.389981 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-02 02:15:03.390333 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-02 02:15:03.390388 | orchestrator | Starting galaxy collection install process
2026-02-02 02:15:03.390425 | orchestrator | Process install dependency map
2026-02-02 02:15:03.390458 | orchestrator | Starting collection install process
2026-02-02 02:15:03.390487 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons'
2026-02-02 02:15:03.390522 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons
2026-02-02 02:15:03.390558 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-02-02 02:15:03.390648 | orchestrator | ok: Item: commons Runtime: 0:00:03.918386
2026-02-02 02:15:05.384178 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-02 02:15:05.384353 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-02 02:15:05.384408 | orchestrator | Starting galaxy collection install process
2026-02-02 02:15:05.384448 | orchestrator | Process install dependency map
2026-02-02 02:15:05.384486 | orchestrator | Starting collection install process
2026-02-02 02:15:05.384522 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services'
2026-02-02 02:15:05.384557 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services
2026-02-02 02:15:05.384591 | orchestrator | osism.services:999.0.0 was installed successfully
2026-02-02 02:15:05.384665 | orchestrator | ok: Item: services Runtime: 0:00:01.651583
2026-02-02 02:15:05.405520 |
2026-02-02 02:15:05.405708 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-02-02 02:15:15.982889 | orchestrator | ok
2026-02-02 02:15:15.994510 |
2026-02-02 02:15:15.994665 | TASK [Wait a little longer for the manager so that everything is ready]
2026-02-02 02:16:16.049740 | orchestrator | ok
2026-02-02 02:16:16.061020 |
2026-02-02 02:16:16.061144 | TASK [Fetch manager ssh hostkey]
2026-02-02 02:16:17.639639 | orchestrator | Output suppressed because no_log was given
2026-02-02 02:16:17.654139 |
2026-02-02 02:16:17.654314 | TASK [Get ssh keypair from terraform environment]
2026-02-02 02:16:18.190262 | orchestrator | ok: Runtime: 0:00:00.010138
2026-02-02 02:16:18.204877 |
2026-02-02 02:16:18.205018 | TASK [Point out that the following task takes some time and does not give any output]
2026-02-02 02:16:18.241389 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-02-02 02:16:18.251445 |
2026-02-02 02:16:18.251570 | TASK [Run manager part 0]
2026-02-02 02:16:20.062694 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-02 02:16:20.320252 | orchestrator |
2026-02-02 02:16:20.320324 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-02-02 02:16:20.320336 | orchestrator |
2026-02-02 02:16:20.320361 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-02-02 02:16:22.347106 | orchestrator | ok: [testbed-manager]
2026-02-02 02:16:22.347209 | orchestrator |
2026-02-02 02:16:22.347265 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-02-02 02:16:22.347290 | orchestrator |
2026-02-02 02:16:22.347314 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-02 02:16:24.326515 | orchestrator | ok: [testbed-manager]
2026-02-02 02:16:24.326564 | orchestrator |
2026-02-02 02:16:24.326571 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-02-02 02:16:25.141264 | orchestrator | ok: [testbed-manager]
2026-02-02 02:16:25.141305 | orchestrator |
2026-02-02 02:16:25.141314 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-02-02 02:16:25.192091 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:16:25.192130 | orchestrator |
2026-02-02 02:16:25.192140 | orchestrator | TASK [Update package cache] ****************************************************
2026-02-02 02:16:25.217860 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:16:25.217898 | orchestrator |
2026-02-02 02:16:25.217905 | orchestrator | TASK [Install required packages] *********************************************** 2026-02-02 02:16:25.252003 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:16:25.252065 | orchestrator | 2026-02-02 02:16:25.252079 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-02-02 02:16:25.281167 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:16:25.281226 | orchestrator | 2026-02-02 02:16:25.281237 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-02 02:16:25.315113 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:16:25.315168 | orchestrator | 2026-02-02 02:16:25.315179 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-02-02 02:16:25.346835 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:16:25.346886 | orchestrator | 2026-02-02 02:16:25.346898 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-02-02 02:16:25.383272 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:16:25.383327 | orchestrator | 2026-02-02 02:16:25.383337 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-02-02 02:16:26.152056 | orchestrator | changed: [testbed-manager] 2026-02-02 02:16:26.152096 | orchestrator | 2026-02-02 02:16:26.152104 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-02-02 02:19:14.515883 | orchestrator | changed: [testbed-manager] 2026-02-02 02:19:14.515947 | orchestrator | 2026-02-02 02:19:14.515961 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-02-02 02:20:32.644739 | orchestrator | changed: [testbed-manager] 2026-02-02 02:20:32.644858 | orchestrator | 2026-02-02 02:20:32.644882 | orchestrator | TASK [Install required 
packages] *********************************************** 2026-02-02 02:20:54.098053 | orchestrator | changed: [testbed-manager] 2026-02-02 02:20:54.098142 | orchestrator | 2026-02-02 02:20:54.098156 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-02-02 02:21:03.769694 | orchestrator | changed: [testbed-manager] 2026-02-02 02:21:03.769803 | orchestrator | 2026-02-02 02:21:03.769821 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-02 02:21:03.826332 | orchestrator | ok: [testbed-manager] 2026-02-02 02:21:03.826453 | orchestrator | 2026-02-02 02:21:03.826481 | orchestrator | TASK [Get current user] ******************************************************** 2026-02-02 02:21:04.674070 | orchestrator | ok: [testbed-manager] 2026-02-02 02:21:04.674133 | orchestrator | 2026-02-02 02:21:04.674150 | orchestrator | TASK [Create venv directory] *************************************************** 2026-02-02 02:21:05.434141 | orchestrator | changed: [testbed-manager] 2026-02-02 02:21:05.434314 | orchestrator | 2026-02-02 02:21:05.434326 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-02-02 02:21:12.149852 | orchestrator | changed: [testbed-manager] 2026-02-02 02:21:12.149915 | orchestrator | 2026-02-02 02:21:12.149945 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-02-02 02:21:18.430668 | orchestrator | changed: [testbed-manager] 2026-02-02 02:21:18.430739 | orchestrator | 2026-02-02 02:21:18.430750 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-02-02 02:21:21.335010 | orchestrator | changed: [testbed-manager] 2026-02-02 02:21:21.335132 | orchestrator | 2026-02-02 02:21:21.335159 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-02-02 02:21:23.203383 | 
orchestrator | changed: [testbed-manager] 2026-02-02 02:21:23.203456 | orchestrator | 2026-02-02 02:21:23.203466 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-02-02 02:21:24.399383 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-02 02:21:24.399480 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-02 02:21:24.399530 | orchestrator | 2026-02-02 02:21:24.399546 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-02-02 02:21:24.436250 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-02 02:21:24.436317 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-02 02:21:24.436324 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-02 02:21:24.436331 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-02-02 02:21:33.858736 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-02 02:21:33.858837 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-02 02:21:33.858854 | orchestrator | 2026-02-02 02:21:33.858867 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-02-02 02:21:34.461884 | orchestrator | changed: [testbed-manager] 2026-02-02 02:21:34.462080 | orchestrator | 2026-02-02 02:21:34.462107 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-02-02 02:23:54.174291 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-02-02 02:23:54.174405 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-02-02 02:23:54.174425 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-02-02 02:23:54.174438 | orchestrator | 2026-02-02 02:23:54.174450 | orchestrator | TASK [Install local collections] *********************************************** 2026-02-02 02:23:56.714508 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-02-02 02:23:56.714587 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-02-02 02:23:56.714599 | orchestrator | 2026-02-02 02:23:56.714609 | orchestrator | PLAY [Create operator user] **************************************************** 2026-02-02 02:23:56.714619 | orchestrator | 2026-02-02 02:23:56.714628 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-02 02:23:58.093625 | orchestrator | ok: [testbed-manager] 2026-02-02 02:23:58.093710 | orchestrator | 2026-02-02 02:23:58.093724 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-02 02:23:58.141270 | orchestrator | ok: [testbed-manager] 2026-02-02 02:23:58.141370 | 
orchestrator | 2026-02-02 02:23:58.141385 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-02 02:23:58.214109 | orchestrator | ok: [testbed-manager] 2026-02-02 02:23:58.214201 | orchestrator | 2026-02-02 02:23:58.214216 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-02 02:23:59.046517 | orchestrator | changed: [testbed-manager] 2026-02-02 02:23:59.046589 | orchestrator | 2026-02-02 02:23:59.046597 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-02 02:23:59.800813 | orchestrator | changed: [testbed-manager] 2026-02-02 02:23:59.800887 | orchestrator | 2026-02-02 02:23:59.800898 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-02 02:24:01.330893 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-02-02 02:24:01.330962 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-02-02 02:24:01.330970 | orchestrator | 2026-02-02 02:24:01.330989 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-02 02:24:02.789823 | orchestrator | changed: [testbed-manager] 2026-02-02 02:24:02.789888 | orchestrator | 2026-02-02 02:24:02.789896 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-02 02:24:04.575824 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-02-02 02:24:04.575940 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-02-02 02:24:04.575983 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-02-02 02:24:04.576007 | orchestrator | 2026-02-02 02:24:04.576021 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-02 02:24:04.627631 | orchestrator | skipping: 
[testbed-manager] 2026-02-02 02:24:04.627753 | orchestrator | 2026-02-02 02:24:04.627772 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-02 02:24:04.699665 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:24:04.699913 | orchestrator | 2026-02-02 02:24:04.699951 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-02 02:24:05.263835 | orchestrator | changed: [testbed-manager] 2026-02-02 02:24:05.263914 | orchestrator | 2026-02-02 02:24:05.263927 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-02 02:24:05.333459 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:24:05.333522 | orchestrator | 2026-02-02 02:24:05.333531 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-02 02:24:06.248540 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-02 02:24:06.248620 | orchestrator | changed: [testbed-manager] 2026-02-02 02:24:06.248647 | orchestrator | 2026-02-02 02:24:06.248669 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-02 02:24:06.280998 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:24:06.281095 | orchestrator | 2026-02-02 02:24:06.281112 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-02 02:24:06.308748 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:24:06.308841 | orchestrator | 2026-02-02 02:24:06.308857 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-02 02:24:06.350223 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:24:06.350321 | orchestrator | 2026-02-02 02:24:06.350340 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-02 02:24:06.433904 | 
orchestrator | skipping: [testbed-manager] 2026-02-02 02:24:06.433998 | orchestrator | 2026-02-02 02:24:06.434047 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-02 02:24:07.207383 | orchestrator | ok: [testbed-manager] 2026-02-02 02:24:07.207452 | orchestrator | 2026-02-02 02:24:07.207463 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-02-02 02:24:07.207474 | orchestrator | 2026-02-02 02:24:07.207479 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-02 02:24:08.735355 | orchestrator | ok: [testbed-manager] 2026-02-02 02:24:08.735445 | orchestrator | 2026-02-02 02:24:08.735463 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-02-02 02:24:09.693942 | orchestrator | changed: [testbed-manager] 2026-02-02 02:24:09.694070 | orchestrator | 2026-02-02 02:24:09.694092 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 02:24:09.694107 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-02-02 02:24:09.694119 | orchestrator | 2026-02-02 02:24:10.086228 | orchestrator | ok: Runtime: 0:07:51.269805 2026-02-02 02:24:10.103762 | 2026-02-02 02:24:10.103908 | TASK [Point out that logging in on the manager is now possible] 2026-02-02 02:24:10.152294 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-02-02 02:24:10.162493 | 2026-02-02 02:24:10.162644 | TASK [Point out that the following task takes some time and does not give any output] 2026-02-02 02:24:10.203889 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2026-02-02 02:24:10.214551 | 2026-02-02 02:24:10.214745 | TASK [Run manager part 1 + 2] 2026-02-02 02:24:11.127443 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-02 02:24:11.190581 | orchestrator | 2026-02-02 02:24:11.190623 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-02-02 02:24:11.190630 | orchestrator | 2026-02-02 02:24:11.190642 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-02 02:24:14.188260 | orchestrator | ok: [testbed-manager] 2026-02-02 02:24:14.188311 | orchestrator | 2026-02-02 02:24:14.188333 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-02 02:24:14.230786 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:24:14.230847 | orchestrator | 2026-02-02 02:24:14.230862 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-02 02:24:14.276203 | orchestrator | ok: [testbed-manager] 2026-02-02 02:24:14.276258 | orchestrator | 2026-02-02 02:24:14.276267 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-02 02:24:14.314839 | orchestrator | ok: [testbed-manager] 2026-02-02 02:24:14.314889 | orchestrator | 2026-02-02 02:24:14.314898 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-02 02:24:14.398384 | orchestrator | ok: [testbed-manager] 2026-02-02 02:24:14.398456 | orchestrator | 2026-02-02 02:24:14.398472 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-02 02:24:14.463676 | orchestrator | ok: [testbed-manager] 2026-02-02 02:24:14.463722 | orchestrator | 2026-02-02 02:24:14.463796 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-02 02:24:14.521099 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-02-02 02:24:14.521147 | orchestrator | 2026-02-02 02:24:14.521155 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-02 02:24:15.290520 | orchestrator | ok: [testbed-manager] 2026-02-02 02:24:15.290577 | orchestrator | 2026-02-02 02:24:15.290588 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-02 02:24:15.328466 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:24:15.328525 | orchestrator | 2026-02-02 02:24:15.328534 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-02 02:24:17.098232 | orchestrator | changed: [testbed-manager] 2026-02-02 02:24:17.098287 | orchestrator | 2026-02-02 02:24:17.098297 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-02 02:24:17.689020 | orchestrator | ok: [testbed-manager] 2026-02-02 02:24:17.689069 | orchestrator | 2026-02-02 02:24:17.689078 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-02 02:24:18.907666 | orchestrator | changed: [testbed-manager] 2026-02-02 02:24:18.907772 | orchestrator | 2026-02-02 02:24:18.907790 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-02 02:24:35.004064 | orchestrator | changed: [testbed-manager] 2026-02-02 02:24:35.004135 | orchestrator | 2026-02-02 02:24:35.004144 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-02-02 02:24:35.684814 | orchestrator | ok: [testbed-manager] 2026-02-02 02:24:35.684880 | orchestrator | 2026-02-02 02:24:35.684896 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-02-02 02:24:35.738116 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:24:35.738234 | orchestrator | 2026-02-02 02:24:35.738249 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-02-02 02:24:36.747315 | orchestrator | changed: [testbed-manager] 2026-02-02 02:24:36.747438 | orchestrator | 2026-02-02 02:24:36.747455 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-02-02 02:24:37.739884 | orchestrator | changed: [testbed-manager] 2026-02-02 02:24:37.740000 | orchestrator | 2026-02-02 02:24:37.740022 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-02-02 02:24:38.339848 | orchestrator | changed: [testbed-manager] 2026-02-02 02:24:38.339903 | orchestrator | 2026-02-02 02:24:38.339913 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-02-02 02:24:38.374126 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-02 02:24:38.374255 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-02 02:24:38.374281 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-02 02:24:38.374302 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-02-02 02:24:42.473125 | orchestrator | changed: [testbed-manager] 2026-02-02 02:24:42.473189 | orchestrator | 2026-02-02 02:24:42.473202 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-02-02 02:24:52.134680 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-02-02 02:24:52.134875 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-02-02 02:24:52.134891 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-02-02 02:24:52.134900 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-02-02 02:24:52.134914 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-02-02 02:24:52.134922 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-02-02 02:24:52.134930 | orchestrator | 2026-02-02 02:24:52.134938 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-02-02 02:24:53.203247 | orchestrator | changed: [testbed-manager] 2026-02-02 02:24:53.203355 | orchestrator | 2026-02-02 02:24:53.203372 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-02-02 02:24:53.242993 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:24:53.243029 | orchestrator | 2026-02-02 02:24:53.243034 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-02-02 02:24:56.274058 | orchestrator | changed: [testbed-manager] 2026-02-02 02:24:56.274104 | orchestrator | 2026-02-02 02:24:56.274113 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-02-02 02:24:56.312355 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:24:56.312395 | orchestrator | 2026-02-02 02:24:56.312402 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-02-02 02:26:44.403818 | orchestrator | changed: [testbed-manager] 2026-02-02 
02:26:44.403919 | orchestrator | 2026-02-02 02:26:44.403937 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-02 02:26:45.658310 | orchestrator | ok: [testbed-manager] 2026-02-02 02:26:45.658350 | orchestrator | 2026-02-02 02:26:45.658358 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 02:26:45.658366 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-02-02 02:26:45.658371 | orchestrator | 2026-02-02 02:26:45.856692 | orchestrator | ok: Runtime: 0:02:35.245539 2026-02-02 02:26:45.872861 | 2026-02-02 02:26:45.873005 | TASK [Reboot manager] 2026-02-02 02:26:47.409166 | orchestrator | ok: Runtime: 0:00:00.996544 2026-02-02 02:26:47.425443 | 2026-02-02 02:26:47.425656 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-02 02:27:03.845966 | orchestrator | ok 2026-02-02 02:27:03.856655 | 2026-02-02 02:27:03.856802 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-02 02:28:03.901203 | orchestrator | ok 2026-02-02 02:28:03.909027 | 2026-02-02 02:28:03.909143 | TASK [Deploy manager + bootstrap nodes] 2026-02-02 02:28:06.797845 | orchestrator | 2026-02-02 02:28:06.798193 | orchestrator | # DEPLOY MANAGER 2026-02-02 02:28:06.798218 | orchestrator | 2026-02-02 02:28:06.798229 | orchestrator | + set -e 2026-02-02 02:28:06.798240 | orchestrator | + echo 2026-02-02 02:28:06.798252 | orchestrator | + echo '# DEPLOY MANAGER' 2026-02-02 02:28:06.798268 | orchestrator | + echo 2026-02-02 02:28:06.798314 | orchestrator | + cat /opt/manager-vars.sh 2026-02-02 02:28:06.801353 | orchestrator | export NUMBER_OF_NODES=6 2026-02-02 02:28:06.801409 | orchestrator | 2026-02-02 02:28:06.801421 | orchestrator | export CEPH_VERSION=reef 2026-02-02 02:28:06.801433 | orchestrator | export CONFIGURATION_VERSION=main 2026-02-02 02:28:06.801444 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-02-02 02:28:06.801467 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-02-02 02:28:06.801476 | orchestrator | 2026-02-02 02:28:06.801492 | orchestrator | export ARA=false 2026-02-02 02:28:06.801502 | orchestrator | export DEPLOY_MODE=manager 2026-02-02 02:28:06.801517 | orchestrator | export TEMPEST=false 2026-02-02 02:28:06.801527 | orchestrator | export IS_ZUUL=true 2026-02-02 02:28:06.801537 | orchestrator | 2026-02-02 02:28:06.801553 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.102 2026-02-02 02:28:06.801563 | orchestrator | export EXTERNAL_API=false 2026-02-02 02:28:06.801572 | orchestrator | 2026-02-02 02:28:06.801581 | orchestrator | export IMAGE_USER=ubuntu 2026-02-02 02:28:06.801596 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-02-02 02:28:06.801605 | orchestrator | 2026-02-02 02:28:06.801615 | orchestrator | export CEPH_STACK=ceph-ansible 2026-02-02 02:28:06.801634 | orchestrator | 2026-02-02 02:28:06.801644 | orchestrator | + echo 2026-02-02 02:28:06.801655 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-02 02:28:06.803100 | orchestrator | ++ export INTERACTIVE=false 2026-02-02 02:28:06.803125 | orchestrator | ++ INTERACTIVE=false 2026-02-02 02:28:06.803135 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-02 02:28:06.803144 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-02 02:28:06.803153 | orchestrator | + source /opt/manager-vars.sh 2026-02-02 02:28:06.803161 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-02 02:28:06.803170 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-02 02:28:06.803285 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-02 02:28:06.803299 | orchestrator | ++ CEPH_VERSION=reef 2026-02-02 02:28:06.803308 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-02 02:28:06.803317 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-02 02:28:06.803326 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-02 02:28:06.803334 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-02 02:28:06.803343 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-02 02:28:06.803371 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-02 02:28:06.803385 | orchestrator | ++ export ARA=false 2026-02-02 02:28:06.803394 | orchestrator | ++ ARA=false 2026-02-02 02:28:06.803402 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-02 02:28:06.803411 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-02 02:28:06.803423 | orchestrator | ++ export TEMPEST=false 2026-02-02 02:28:06.803432 | orchestrator | ++ TEMPEST=false 2026-02-02 02:28:06.803441 | orchestrator | ++ export IS_ZUUL=true 2026-02-02 02:28:06.803449 | orchestrator | ++ IS_ZUUL=true 2026-02-02 02:28:06.803501 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.102 2026-02-02 02:28:06.803513 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.102 2026-02-02 02:28:06.803646 | orchestrator | ++ export EXTERNAL_API=false 2026-02-02 02:28:06.803659 | orchestrator | ++ EXTERNAL_API=false 2026-02-02 02:28:06.803668 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-02 02:28:06.803740 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-02 02:28:06.803752 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-02 02:28:06.803761 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-02 02:28:06.803770 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-02 02:28:06.803779 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-02 02:28:06.803788 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-02-02 02:28:06.868726 | orchestrator | + docker version 2026-02-02 02:28:07.157815 | orchestrator | Client: Docker Engine - Community 2026-02-02 02:28:07.157911 | orchestrator | Version: 27.5.1 2026-02-02 02:28:07.157922 | orchestrator | API version: 1.47 2026-02-02 02:28:07.157929 | orchestrator | Go version: go1.22.11 2026-02-02 02:28:07.157936 | orchestrator | Git commit: 9f9e405 2026-02-02 02:28:07.157943 
| orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-02 02:28:07.157951 | orchestrator | OS/Arch: linux/amd64 2026-02-02 02:28:07.157957 | orchestrator | Context: default 2026-02-02 02:28:07.157962 | orchestrator | 2026-02-02 02:28:07.157969 | orchestrator | Server: Docker Engine - Community 2026-02-02 02:28:07.157976 | orchestrator | Engine: 2026-02-02 02:28:07.157983 | orchestrator | Version: 27.5.1 2026-02-02 02:28:07.157990 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-02-02 02:28:07.158124 | orchestrator | Go version: go1.22.11 2026-02-02 02:28:07.158133 | orchestrator | Git commit: 4c9b3b0 2026-02-02 02:28:07.158137 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-02 02:28:07.158141 | orchestrator | OS/Arch: linux/amd64 2026-02-02 02:28:07.158144 | orchestrator | Experimental: false 2026-02-02 02:28:07.158148 | orchestrator | containerd: 2026-02-02 02:28:07.158153 | orchestrator | Version: v2.2.1 2026-02-02 02:28:07.158157 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-02-02 02:28:07.158161 | orchestrator | runc: 2026-02-02 02:28:07.158165 | orchestrator | Version: 1.3.4 2026-02-02 02:28:07.158169 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-02-02 02:28:07.158173 | orchestrator | docker-init: 2026-02-02 02:28:07.158177 | orchestrator | Version: 0.19.0 2026-02-02 02:28:07.158182 | orchestrator | GitCommit: de40ad0 2026-02-02 02:28:07.163018 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-02-02 02:28:07.171934 | orchestrator | + set -e 2026-02-02 02:28:07.172025 | orchestrator | + source /opt/manager-vars.sh 2026-02-02 02:28:07.172036 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-02 02:28:07.172079 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-02 02:28:07.172088 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-02 02:28:07.172096 | orchestrator | ++ CEPH_VERSION=reef 2026-02-02 02:28:07.172115 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-02 
02:28:07.172125 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-02 02:28:07.172133 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-02 02:28:07.172141 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-02 02:28:07.172149 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-02 02:28:07.172157 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-02 02:28:07.172165 | orchestrator | ++ export ARA=false 2026-02-02 02:28:07.172173 | orchestrator | ++ ARA=false 2026-02-02 02:28:07.172182 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-02 02:28:07.172189 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-02 02:28:07.172197 | orchestrator | ++ export TEMPEST=false 2026-02-02 02:28:07.172205 | orchestrator | ++ TEMPEST=false 2026-02-02 02:28:07.172213 | orchestrator | ++ export IS_ZUUL=true 2026-02-02 02:28:07.172220 | orchestrator | ++ IS_ZUUL=true 2026-02-02 02:28:07.172228 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.102 2026-02-02 02:28:07.172237 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.102 2026-02-02 02:28:07.172244 | orchestrator | ++ export EXTERNAL_API=false 2026-02-02 02:28:07.172252 | orchestrator | ++ EXTERNAL_API=false 2026-02-02 02:28:07.172260 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-02 02:28:07.172268 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-02 02:28:07.172276 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-02 02:28:07.172284 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-02 02:28:07.172291 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-02 02:28:07.172299 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-02 02:28:07.172307 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-02 02:28:07.172315 | orchestrator | ++ export INTERACTIVE=false 2026-02-02 02:28:07.172323 | orchestrator | ++ INTERACTIVE=false 2026-02-02 02:28:07.172330 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-02 02:28:07.172342 | orchestrator | ++ 
OSISM_APPLY_RETRY=1 2026-02-02 02:28:07.172350 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-02 02:28:07.172358 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-02-02 02:28:07.180659 | orchestrator | + set -e 2026-02-02 02:28:07.180758 | orchestrator | + VERSION=9.5.0 2026-02-02 02:28:07.180773 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-02-02 02:28:07.187719 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-02 02:28:07.187823 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-02 02:28:07.193116 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-02 02:28:07.200922 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-02 02:28:07.207493 | orchestrator | /opt/configuration ~ 2026-02-02 02:28:07.207576 | orchestrator | + set -e 2026-02-02 02:28:07.207585 | orchestrator | + pushd /opt/configuration 2026-02-02 02:28:07.207592 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-02 02:28:07.209175 | orchestrator | + source /opt/venv/bin/activate 2026-02-02 02:28:07.210394 | orchestrator | ++ deactivate nondestructive 2026-02-02 02:28:07.210425 | orchestrator | ++ '[' -n '' ']' 2026-02-02 02:28:07.210431 | orchestrator | ++ '[' -n '' ']' 2026-02-02 02:28:07.210454 | orchestrator | ++ hash -r 2026-02-02 02:28:07.210459 | orchestrator | ++ '[' -n '' ']' 2026-02-02 02:28:07.210463 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-02 02:28:07.210472 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-02 02:28:07.210476 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-02 02:28:07.210545 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-02 02:28:07.210551 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-02 02:28:07.210555 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-02 02:28:07.210559 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-02 02:28:07.210564 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-02 02:28:07.210648 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-02 02:28:07.210712 | orchestrator | ++ export PATH 2026-02-02 02:28:07.210721 | orchestrator | ++ '[' -n '' ']' 2026-02-02 02:28:07.210725 | orchestrator | ++ '[' -z '' ']' 2026-02-02 02:28:07.210734 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-02 02:28:07.210738 | orchestrator | ++ PS1='(venv) ' 2026-02-02 02:28:07.210742 | orchestrator | ++ export PS1 2026-02-02 02:28:07.210746 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-02 02:28:07.210749 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-02 02:28:07.210753 | orchestrator | ++ hash -r 2026-02-02 02:28:07.210845 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-02-02 02:28:08.521072 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-02-02 02:28:08.522208 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-02-02 02:28:08.523829 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-02-02 02:28:08.525640 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-02-02 02:28:08.527024 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-02-02 02:28:08.540957 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-02-02 02:28:08.543035 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-02-02 02:28:08.544227 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-02-02 02:28:08.545650 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-02-02 02:28:08.582955 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-02-02 02:28:08.589023 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-02-02 02:28:08.593849 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-02-02 02:28:08.596706 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-02-02 02:28:08.604305 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-02-02 02:28:08.859896 | orchestrator | ++ which gilt 2026-02-02 02:28:08.863244 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-02-02 02:28:08.863287 | orchestrator | + /opt/venv/bin/gilt overlay 2026-02-02 02:28:09.129605 | orchestrator | osism.cfg-generics: 2026-02-02 02:28:09.305339 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-02-02 02:28:09.305429 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-02-02 02:28:09.305704 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-02-02 02:28:09.305721 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-02-02 02:28:10.099377 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-02-02 02:28:10.111382 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-02-02 02:28:10.481571 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-02-02 02:28:10.534642 | orchestrator | ~ 2026-02-02 02:28:10.534738 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-02 02:28:10.534752 | orchestrator | + deactivate 2026-02-02 02:28:10.534763 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-02 02:28:10.534785 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-02 02:28:10.534794 | orchestrator | + export PATH 2026-02-02 02:28:10.534803 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-02 02:28:10.534812 | orchestrator | + '[' -n '' ']' 2026-02-02 02:28:10.534823 | orchestrator | + hash -r 2026-02-02 02:28:10.534832 | orchestrator | + '[' -n '' ']' 2026-02-02 02:28:10.534840 | orchestrator | + unset VIRTUAL_ENV 2026-02-02 02:28:10.534849 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-02 02:28:10.534857 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-02 02:28:10.534865 | orchestrator | + unset -f deactivate 2026-02-02 02:28:10.534874 | orchestrator | + popd 2026-02-02 02:28:10.536772 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-02 02:28:10.536811 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-02-02 02:28:10.537140 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-02 02:28:10.579685 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-02 02:28:10.579830 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-02-02 02:28:10.579923 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-02 02:28:10.619874 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-02 02:28:10.620169 | orchestrator | ++ semver 2024.2 2025.1 2026-02-02 02:28:10.656466 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-02 02:28:10.656546 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-02-02 02:28:10.727745 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-02 02:28:10.727849 | orchestrator | + source /opt/venv/bin/activate 2026-02-02 02:28:10.727864 | orchestrator | ++ deactivate nondestructive 2026-02-02 02:28:10.727889 | orchestrator | ++ '[' -n '' ']' 2026-02-02 02:28:10.727901 | orchestrator | ++ '[' -n '' ']' 2026-02-02 02:28:10.727912 | orchestrator | ++ hash -r 2026-02-02 02:28:10.727923 | orchestrator | ++ '[' -n '' ']' 2026-02-02 02:28:10.727935 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-02 02:28:10.727946 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-02 02:28:10.727957 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-02 02:28:10.728122 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-02 02:28:10.728142 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-02 02:28:10.728153 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-02 02:28:10.728165 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-02 02:28:10.728182 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-02 02:28:10.728255 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-02 02:28:10.728451 | orchestrator | ++ export PATH 2026-02-02 02:28:10.728469 | orchestrator | ++ '[' -n '' ']' 2026-02-02 02:28:10.728551 | orchestrator | ++ '[' -z '' ']' 2026-02-02 02:28:10.728797 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-02 02:28:10.728829 | orchestrator | ++ PS1='(venv) ' 2026-02-02 02:28:10.728849 | orchestrator | ++ export PS1 2026-02-02 02:28:10.728873 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-02 02:28:10.728885 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-02 02:28:10.728896 | orchestrator | ++ hash -r 2026-02-02 02:28:10.729092 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-02-02 02:28:12.043862 | orchestrator | 2026-02-02 02:28:12.043949 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-02-02 02:28:12.043957 | orchestrator | 2026-02-02 02:28:12.043962 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-02 02:28:12.686275 | orchestrator | ok: [testbed-manager] 2026-02-02 02:28:12.686412 | orchestrator | 2026-02-02 02:28:12.686440 | orchestrator | TASK [Copy fact files] ********************************************************* 
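Earlier in the trace, the configuration scripts gate features on `semver` comparisons: `semver 9.5.0 7.0.0` yields `1` (so `enable_osism_kubernetes: true` is written), while `semver 9.5.0 10.0.0-0` and `semver 2024.2 2025.1` yield `-1` (so those branches are skipped). A minimal sketch of such a comparison helper, assuming the actual `semver` function returns -1/0/1 and ignoring pre-release precedence:

```python
def semver_cmp(a: str, b: str) -> int:
    """Compare two dotted version strings; return -1, 0 or 1.

    Hypothetical reimplementation of the `semver` shell helper seen in
    the trace; pre-release suffixes (e.g. '-0') are stripped, not ranked.
    """
    pa = [int(x) for x in a.split("-")[0].split(".")]
    pb = [int(x) for x in b.split("-")[0].split(".")]
    return (pa > pb) - (pa < pb)

# Mirrors the gating seen in the log: 9.5.0 >= 7.0.0 enables the flag.
if semver_cmp("9.5.0", "7.0.0") >= 0:
    print("enable_osism_kubernetes: true")  # → enable_osism_kubernetes: true
```

The same predicate explains why the `10.0.0-0` and `2025.1` branches print `[[ -1 -ge 0 ]]` and fall through untaken.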
2026-02-02 02:28:13.690538 | orchestrator | changed: [testbed-manager] 2026-02-02 02:28:13.690670 | orchestrator | 2026-02-02 02:28:13.690686 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-02-02 02:28:13.690734 | orchestrator | 2026-02-02 02:28:13.690746 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-02 02:28:17.123156 | orchestrator | ok: [testbed-manager] 2026-02-02 02:28:17.123287 | orchestrator | 2026-02-02 02:28:17.123305 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-02-02 02:28:17.177594 | orchestrator | ok: [testbed-manager] 2026-02-02 02:28:17.177693 | orchestrator | 2026-02-02 02:28:17.177708 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-02-02 02:28:17.710577 | orchestrator | changed: [testbed-manager] 2026-02-02 02:28:17.710683 | orchestrator | 2026-02-02 02:28:17.710703 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-02-02 02:28:17.757214 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:28:17.757310 | orchestrator | 2026-02-02 02:28:17.757325 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-02-02 02:28:18.126732 | orchestrator | changed: [testbed-manager] 2026-02-02 02:28:18.126830 | orchestrator | 2026-02-02 02:28:18.126844 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-02-02 02:28:18.465258 | orchestrator | ok: [testbed-manager] 2026-02-02 02:28:18.465357 | orchestrator | 2026-02-02 02:28:18.465371 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-02-02 02:28:18.597476 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:28:18.597554 | orchestrator | 2026-02-02 02:28:18.597564 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-02-02 02:28:18.597572 | orchestrator | 2026-02-02 02:28:18.597580 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-02 02:28:20.438793 | orchestrator | ok: [testbed-manager] 2026-02-02 02:28:20.438874 | orchestrator | 2026-02-02 02:28:20.438884 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-02-02 02:28:20.565108 | orchestrator | included: osism.services.traefik for testbed-manager 2026-02-02 02:28:20.565204 | orchestrator | 2026-02-02 02:28:20.565218 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-02-02 02:28:20.631520 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-02-02 02:28:20.631616 | orchestrator | 2026-02-02 02:28:20.631632 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-02-02 02:28:21.799000 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-02-02 02:28:21.799114 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-02-02 02:28:21.799128 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-02-02 02:28:21.799136 | orchestrator | 2026-02-02 02:28:21.799146 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-02-02 02:28:23.711716 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-02-02 02:28:23.711795 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-02-02 02:28:23.711802 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-02-02 02:28:23.711807 | orchestrator | 2026-02-02 02:28:23.711812 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-02-02 02:28:24.391558 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-02 02:28:24.391660 | orchestrator | changed: [testbed-manager] 2026-02-02 02:28:24.391676 | orchestrator | 2026-02-02 02:28:24.391689 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-02-02 02:28:25.041040 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-02 02:28:25.041190 | orchestrator | changed: [testbed-manager] 2026-02-02 02:28:25.041206 | orchestrator | 2026-02-02 02:28:25.041222 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-02-02 02:28:25.101733 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:28:25.101813 | orchestrator | 2026-02-02 02:28:25.101824 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-02-02 02:28:25.498330 | orchestrator | ok: [testbed-manager] 2026-02-02 02:28:25.498418 | orchestrator | 2026-02-02 02:28:25.498432 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-02-02 02:28:25.588714 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-02-02 02:28:25.588807 | orchestrator | 2026-02-02 02:28:25.588823 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-02-02 02:28:26.751450 | orchestrator | changed: [testbed-manager] 2026-02-02 02:28:26.751549 | orchestrator | 2026-02-02 02:28:26.751560 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-02-02 02:28:27.676552 | orchestrator | changed: [testbed-manager] 2026-02-02 02:28:27.676646 | orchestrator | 2026-02-02 02:28:27.676659 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-02-02 02:28:45.451439 | 
orchestrator | changed: [testbed-manager] 2026-02-02 02:28:45.451567 | orchestrator | 2026-02-02 02:28:45.451583 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-02-02 02:28:45.510173 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:28:45.510291 | orchestrator | 2026-02-02 02:28:45.510321 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-02-02 02:28:45.510332 | orchestrator | 2026-02-02 02:28:45.510340 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-02 02:28:47.279051 | orchestrator | ok: [testbed-manager] 2026-02-02 02:28:47.279153 | orchestrator | 2026-02-02 02:28:47.279167 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-02-02 02:28:47.404793 | orchestrator | included: osism.services.manager for testbed-manager 2026-02-02 02:28:47.404875 | orchestrator | 2026-02-02 02:28:47.404886 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-02-02 02:28:47.470826 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-02-02 02:28:47.470916 | orchestrator | 2026-02-02 02:28:47.470929 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-02-02 02:28:50.459730 | orchestrator | ok: [testbed-manager] 2026-02-02 02:28:50.459851 | orchestrator | 2026-02-02 02:28:50.459869 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-02-02 02:28:50.513020 | orchestrator | ok: [testbed-manager] 2026-02-02 02:28:50.513152 | orchestrator | 2026-02-02 02:28:50.513170 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-02-02 02:28:50.665571 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-02-02 02:28:50.665706 | orchestrator | 2026-02-02 02:28:50.665740 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-02-02 02:28:53.706299 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-02-02 02:28:53.706392 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-02-02 02:28:53.706405 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-02-02 02:28:53.706416 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-02-02 02:28:53.706425 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-02-02 02:28:53.706434 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-02-02 02:28:53.706443 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-02-02 02:28:53.706452 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-02-02 02:28:53.706462 | orchestrator | 2026-02-02 02:28:53.706472 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-02-02 02:28:54.350486 | orchestrator | changed: [testbed-manager] 2026-02-02 02:28:54.350603 | orchestrator | 2026-02-02 02:28:54.350633 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-02-02 02:28:55.007766 | orchestrator | changed: [testbed-manager] 2026-02-02 02:28:55.007868 | orchestrator | 2026-02-02 02:28:55.007884 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-02-02 02:28:55.084742 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-02-02 02:28:55.084814 | orchestrator | 2026-02-02 02:28:55.084821 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-02-02 02:28:56.386116 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-02-02 02:28:56.386225 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-02-02 02:28:56.386241 | orchestrator | 2026-02-02 02:28:56.386255 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-02-02 02:28:57.041310 | orchestrator | changed: [testbed-manager] 2026-02-02 02:28:57.041411 | orchestrator | 2026-02-02 02:28:57.041428 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-02-02 02:28:57.086432 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:28:57.086534 | orchestrator | 2026-02-02 02:28:57.086552 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-02-02 02:28:57.165876 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-02-02 02:28:57.165954 | orchestrator | 2026-02-02 02:28:57.165965 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-02-02 02:28:57.835905 | orchestrator | changed: [testbed-manager] 2026-02-02 02:28:57.835996 | orchestrator | 2026-02-02 02:28:57.836008 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-02-02 02:28:57.905186 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-02-02 02:28:57.905260 | orchestrator | 2026-02-02 02:28:57.905269 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-02-02 02:28:59.341967 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-02 02:28:59.342192 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-02-02 02:28:59.342208 | orchestrator | changed: [testbed-manager] 2026-02-02 02:28:59.342219 | orchestrator | 2026-02-02 02:28:59.342241 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-02-02 02:29:00.009825 | orchestrator | changed: [testbed-manager] 2026-02-02 02:29:00.009936 | orchestrator | 2026-02-02 02:29:00.009962 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-02-02 02:29:00.067042 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:29:00.067213 | orchestrator | 2026-02-02 02:29:00.067244 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-02-02 02:29:00.159942 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-02-02 02:29:00.160041 | orchestrator | 2026-02-02 02:29:00.160058 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-02-02 02:29:00.722158 | orchestrator | changed: [testbed-manager] 2026-02-02 02:29:00.722261 | orchestrator | 2026-02-02 02:29:00.722278 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-02-02 02:29:01.138858 | orchestrator | changed: [testbed-manager] 2026-02-02 02:29:01.138951 | orchestrator | 2026-02-02 02:29:01.138963 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-02-02 02:29:02.397494 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-02-02 02:29:02.397584 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-02-02 02:29:02.397596 | orchestrator | 2026-02-02 02:29:02.397605 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-02-02 02:29:03.071217 | orchestrator | changed: [testbed-manager] 2026-02-02 
02:29:03.071301 | orchestrator | 2026-02-02 02:29:03.071312 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-02-02 02:29:03.493492 | orchestrator | ok: [testbed-manager] 2026-02-02 02:29:03.493586 | orchestrator | 2026-02-02 02:29:03.493600 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-02-02 02:29:03.912829 | orchestrator | changed: [testbed-manager] 2026-02-02 02:29:03.912934 | orchestrator | 2026-02-02 02:29:03.912949 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-02-02 02:29:03.963980 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:29:03.964074 | orchestrator | 2026-02-02 02:29:03.964089 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-02-02 02:29:04.055322 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-02-02 02:29:04.055439 | orchestrator | 2026-02-02 02:29:04.055458 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-02-02 02:29:04.101584 | orchestrator | ok: [testbed-manager] 2026-02-02 02:29:04.101654 | orchestrator | 2026-02-02 02:29:04.101661 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-02-02 02:29:06.188775 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-02-02 02:29:06.188884 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-02-02 02:29:06.188902 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-02-02 02:29:06.188915 | orchestrator | 2026-02-02 02:29:06.188927 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-02-02 02:29:06.887062 | orchestrator | changed: [testbed-manager] 2026-02-02 
02:29:06.887259 | orchestrator | 2026-02-02 02:29:06.887281 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-02-02 02:29:07.616182 | orchestrator | changed: [testbed-manager] 2026-02-02 02:29:07.616273 | orchestrator | 2026-02-02 02:29:07.616281 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-02-02 02:29:08.376096 | orchestrator | changed: [testbed-manager] 2026-02-02 02:29:08.376229 | orchestrator | 2026-02-02 02:29:08.376246 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-02-02 02:29:08.442607 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-02-02 02:29:08.442703 | orchestrator | 2026-02-02 02:29:08.442723 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-02-02 02:29:08.488866 | orchestrator | ok: [testbed-manager] 2026-02-02 02:29:08.488964 | orchestrator | 2026-02-02 02:29:08.488979 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-02-02 02:29:09.291977 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-02-02 02:29:09.292075 | orchestrator | 2026-02-02 02:29:09.292089 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-02-02 02:29:09.374473 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-02-02 02:29:09.374566 | orchestrator | 2026-02-02 02:29:09.374581 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-02-02 02:29:10.108807 | orchestrator | changed: [testbed-manager] 2026-02-02 02:29:10.108911 | orchestrator | 2026-02-02 02:29:10.108928 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-02-02 02:29:10.735744 | orchestrator | ok: [testbed-manager] 2026-02-02 02:29:10.735842 | orchestrator | 2026-02-02 02:29:10.735859 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-02-02 02:29:10.804939 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:29:10.805061 | orchestrator | 2026-02-02 02:29:10.805079 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-02-02 02:29:10.860072 | orchestrator | ok: [testbed-manager] 2026-02-02 02:29:10.860200 | orchestrator | 2026-02-02 02:29:10.860243 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-02-02 02:29:11.724145 | orchestrator | changed: [testbed-manager] 2026-02-02 02:29:11.724231 | orchestrator | 2026-02-02 02:29:11.724246 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-02-02 02:30:15.911572 | orchestrator | changed: [testbed-manager] 2026-02-02 02:30:15.911644 | orchestrator | 2026-02-02 02:30:15.911654 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-02-02 02:30:16.830420 | orchestrator | ok: [testbed-manager] 2026-02-02 02:30:16.830479 | orchestrator | 2026-02-02 02:30:16.830489 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-02-02 02:30:16.894363 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:30:16.894421 | orchestrator | 2026-02-02 02:30:16.894430 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-02-02 02:30:19.716668 | orchestrator | changed: [testbed-manager] 2026-02-02 02:30:19.716752 | orchestrator | 2026-02-02 02:30:19.716767 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
2026-02-02 02:30:19.775575 | orchestrator | ok: [testbed-manager] 2026-02-02 02:30:19.775667 | orchestrator | 2026-02-02 02:30:19.775684 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-02 02:30:19.775696 | orchestrator | 2026-02-02 02:30:19.775708 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-02-02 02:30:19.918510 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:30:19.918597 | orchestrator | 2026-02-02 02:30:19.918613 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-02-02 02:31:19.981632 | orchestrator | Pausing for 60 seconds 2026-02-02 02:31:19.981745 | orchestrator | changed: [testbed-manager] 2026-02-02 02:31:19.981760 | orchestrator | 2026-02-02 02:31:19.981772 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-02-02 02:31:22.685109 | orchestrator | changed: [testbed-manager] 2026-02-02 02:31:22.685252 | orchestrator | 2026-02-02 02:31:22.685272 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-02-02 02:32:24.813076 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-02-02 02:32:24.813205 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-02-02 02:32:24.813253 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 
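The `FAILED - RETRYING` lines above are Ansible's `until`/`retries`/`delay` polling: the handler probes the manager service repeatedly, burning one retry per failed probe (50, 49, 48 left) until a probe succeeds. A minimal sketch of that loop, assuming illustrative retry and delay values rather than the role's actual ones:

```python
import time

def wait_until_healthy(check, retries=50, delay=5):
    """Poll `check()` until it returns True, in the spirit of Ansible's
    retries/delay semantics on the health-wait handler above.

    Returns the number of attempts used; raises on exhaustion.
    """
    for attempt in range(retries):
        if check():
            return attempt + 1
        time.sleep(delay)
    raise TimeoutError("service did not become healthy")

# The log shows three failed probes before success, i.e. four attempts.
state = iter([False, False, False, True])
attempts = wait_until_healthy(lambda: next(state), delay=0)
print(attempts)  # → 4
```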
2026-02-02 02:32:24.813304 | orchestrator | changed: [testbed-manager] 2026-02-02 02:32:24.813323 | orchestrator | 2026-02-02 02:32:24.813340 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-02-02 02:32:36.309737 | orchestrator | changed: [testbed-manager] 2026-02-02 02:32:36.309872 | orchestrator | 2026-02-02 02:32:36.309894 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-02-02 02:32:36.402098 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-02-02 02:32:36.402220 | orchestrator | 2026-02-02 02:32:36.402231 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-02 02:32:36.402245 | orchestrator | 2026-02-02 02:32:36.402257 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-02-02 02:32:36.445798 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:32:36.445891 | orchestrator | 2026-02-02 02:32:36.445906 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-02-02 02:32:36.513562 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-02-02 02:32:36.513649 | orchestrator | 2026-02-02 02:32:36.513660 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-02-02 02:32:37.328072 | orchestrator | changed: [testbed-manager] 2026-02-02 02:32:37.328150 | orchestrator | 2026-02-02 02:32:37.328157 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-02-02 02:32:40.604331 | orchestrator | ok: [testbed-manager] 2026-02-02 02:32:40.604419 | orchestrator | 2026-02-02 02:32:40.604429 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-02-02 02:32:40.683967 | orchestrator | ok: [testbed-manager] => { 2026-02-02 02:32:40.684058 | orchestrator | "version_check_result.stdout_lines": [ 2026-02-02 02:32:40.684071 | orchestrator | "=== OSISM Container Version Check ===", 2026-02-02 02:32:40.684081 | orchestrator | "Checking running containers against expected versions...", 2026-02-02 02:32:40.684092 | orchestrator | "", 2026-02-02 02:32:40.684102 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-02-02 02:32:40.684111 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-02-02 02:32:40.684121 | orchestrator | " Enabled: true", 2026-02-02 02:32:40.684130 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-02-02 02:32:40.684138 | orchestrator | " Status: ✅ MATCH", 2026-02-02 02:32:40.684147 | orchestrator | "", 2026-02-02 02:32:40.684156 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-02-02 02:32:40.684188 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-02-02 02:32:40.684197 | orchestrator | " Enabled: true", 2026-02-02 02:32:40.684206 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-02-02 02:32:40.684215 | orchestrator | " Status: ✅ MATCH", 2026-02-02 02:32:40.684224 | orchestrator | "", 2026-02-02 02:32:40.684233 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-02-02 02:32:40.684242 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-02-02 02:32:40.684250 | orchestrator | " Enabled: true", 2026-02-02 02:32:40.684259 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-02-02 02:32:40.684300 | orchestrator | " Status: ✅ MATCH", 2026-02-02 02:32:40.684317 | orchestrator | 
"", 2026-02-02 02:32:40.684328 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-02-02 02:32:40.684337 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-02-02 02:32:40.684346 | orchestrator | " Enabled: true", 2026-02-02 02:32:40.684355 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-02-02 02:32:40.684364 | orchestrator | " Status: ✅ MATCH", 2026-02-02 02:32:40.684372 | orchestrator | "", 2026-02-02 02:32:40.684383 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-02-02 02:32:40.684392 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-02-02 02:32:40.684400 | orchestrator | " Enabled: true", 2026-02-02 02:32:40.684409 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-02-02 02:32:40.684417 | orchestrator | " Status: ✅ MATCH", 2026-02-02 02:32:40.684426 | orchestrator | "", 2026-02-02 02:32:40.684435 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-02-02 02:32:40.684443 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-02 02:32:40.684452 | orchestrator | " Enabled: true", 2026-02-02 02:32:40.684461 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-02 02:32:40.684469 | orchestrator | " Status: ✅ MATCH", 2026-02-02 02:32:40.684483 | orchestrator | "", 2026-02-02 02:32:40.684497 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-02-02 02:32:40.684511 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-02 02:32:40.684526 | orchestrator | " Enabled: true", 2026-02-02 02:32:40.684542 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-02 02:32:40.684556 | orchestrator | " Status: ✅ MATCH", 2026-02-02 02:32:40.684566 | orchestrator | "", 2026-02-02 02:32:40.684577 | orchestrator | "Checking service: 
mariadb (MariaDB for ARA)", 2026-02-02 02:32:40.684587 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-02 02:32:40.684597 | orchestrator | " Enabled: true", 2026-02-02 02:32:40.684607 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-02 02:32:40.684618 | orchestrator | " Status: ✅ MATCH", 2026-02-02 02:32:40.684633 | orchestrator | "", 2026-02-02 02:32:40.684648 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-02 02:32:40.684662 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-02 02:32:40.684676 | orchestrator | " Enabled: true", 2026-02-02 02:32:40.684691 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-02 02:32:40.684705 | orchestrator | " Status: ✅ MATCH", 2026-02-02 02:32:40.684718 | orchestrator | "", 2026-02-02 02:32:40.684733 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-02 02:32:40.684748 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-02 02:32:40.684808 | orchestrator | " Enabled: true", 2026-02-02 02:32:40.684824 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-02 02:32:40.684837 | orchestrator | " Status: ✅ MATCH", 2026-02-02 02:32:40.684865 | orchestrator | "", 2026-02-02 02:32:40.684880 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-02 02:32:40.684906 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-02 02:32:40.684920 | orchestrator | " Enabled: true", 2026-02-02 02:32:40.684934 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-02 02:32:40.684949 | orchestrator | " Status: ✅ MATCH", 2026-02-02 02:32:40.684963 | orchestrator | "", 2026-02-02 02:32:40.684976 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-02 02:32:40.684990 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-02 02:32:40.685003 | orchestrator | " Enabled: true", 2026-02-02 02:32:40.685017 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-02 02:32:40.685031 | orchestrator | " Status: ✅ MATCH", 2026-02-02 02:32:40.685045 | orchestrator | "", 2026-02-02 02:32:40.685060 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-02 02:32:40.685073 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-02 02:32:40.685087 | orchestrator | " Enabled: true", 2026-02-02 02:32:40.685102 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-02 02:32:40.685115 | orchestrator | " Status: ✅ MATCH", 2026-02-02 02:32:40.685129 | orchestrator | "", 2026-02-02 02:32:40.685143 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-02 02:32:40.685157 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-02 02:32:40.685170 | orchestrator | " Enabled: true", 2026-02-02 02:32:40.685184 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-02 02:32:40.685220 | orchestrator | " Status: ✅ MATCH", 2026-02-02 02:32:40.685235 | orchestrator | "", 2026-02-02 02:32:40.685249 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-02 02:32:40.685263 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-02 02:32:40.685371 | orchestrator | " Enabled: true", 2026-02-02 02:32:40.685398 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-02 02:32:40.685413 | orchestrator | " Status: ✅ MATCH", 2026-02-02 02:32:40.685426 | orchestrator | "", 2026-02-02 02:32:40.685440 | orchestrator | "=== Summary ===", 2026-02-02 02:32:40.685454 | orchestrator | "Errors (version mismatches): 0", 2026-02-02 02:32:40.685468 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-02-02 02:32:40.685482 | orchestrator | "", 2026-02-02 02:32:40.685496 | orchestrator | "✅ All running containers match expected versions!" 2026-02-02 02:32:40.685509 | orchestrator | ] 2026-02-02 02:32:40.685524 | orchestrator | } 2026-02-02 02:32:40.685538 | orchestrator | 2026-02-02 02:32:40.685553 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-02 02:32:40.739721 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:32:40.739818 | orchestrator | 2026-02-02 02:32:40.739834 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 02:32:40.739848 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-02-02 02:32:40.739860 | orchestrator | 2026-02-02 02:32:40.875875 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-02 02:32:40.875958 | orchestrator | + deactivate 2026-02-02 02:32:40.875971 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-02 02:32:40.875982 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-02 02:32:40.875990 | orchestrator | + export PATH 2026-02-02 02:32:40.875998 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-02 02:32:40.876006 | orchestrator | + '[' -n '' ']' 2026-02-02 02:32:40.876014 | orchestrator | + hash -r 2026-02-02 02:32:40.876022 | orchestrator | + '[' -n '' ']' 2026-02-02 02:32:40.876030 | orchestrator | + unset VIRTUAL_ENV 2026-02-02 02:32:40.876038 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-02 02:32:40.876046 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-02 02:32:40.876054 | orchestrator | + unset -f deactivate 2026-02-02 02:32:40.876063 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-02-02 02:32:40.886724 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-02 02:32:40.886800 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-02 02:32:40.886833 | orchestrator | + local max_attempts=60 2026-02-02 02:32:40.886840 | orchestrator | + local name=ceph-ansible 2026-02-02 02:32:40.886847 | orchestrator | + local attempt_num=1 2026-02-02 02:32:40.888001 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 02:32:40.931232 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-02 02:32:40.931347 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-02 02:32:40.931358 | orchestrator | + local max_attempts=60 2026-02-02 02:32:40.931366 | orchestrator | + local name=kolla-ansible 2026-02-02 02:32:40.931373 | orchestrator | + local attempt_num=1 2026-02-02 02:32:40.933194 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-02 02:32:40.969242 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-02 02:32:40.969412 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-02 02:32:40.969427 | orchestrator | + local max_attempts=60 2026-02-02 02:32:40.969436 | orchestrator | + local name=osism-ansible 2026-02-02 02:32:40.969444 | orchestrator | + local attempt_num=1 2026-02-02 02:32:40.970127 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-02 02:32:41.007610 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-02 02:32:41.007689 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-02 02:32:41.007699 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-02 02:32:41.729412 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-02-02 02:32:41.892076 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-02 02:32:41.892186 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-02-02 02:32:41.892216 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-02-02 02:32:41.892235 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-02-02 02:32:41.892255 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-02-02 02:32:41.892354 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-02-02 02:32:41.892376 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-02-02 02:32:41.892394 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-02-02 02:32:41.892412 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-02-02 02:32:41.892430 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-02-02 02:32:41.892447 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 
2026-02-02 02:32:41.892464 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-02-02 02:32:41.892481 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-02-02 02:32:41.892531 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-02-02 02:32:41.892549 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-02-02 02:32:41.892566 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-02-02 02:32:41.900795 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-02 02:32:41.966538 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-02 02:32:41.966620 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-02-02 02:32:41.972071 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-02-02 02:32:54.308537 | orchestrator | 2026-02-02 02:32:54 | INFO  | Task c8f8a387-2e4d-404a-bf32-546c3b64f558 (resolvconf) was prepared for execution. 2026-02-02 02:32:54.308627 | orchestrator | 2026-02-02 02:32:54 | INFO  | It takes a moment until task c8f8a387-2e4d-404a-bf32-546c3b64f558 (resolvconf) has been started and output is visible here. 
2026-02-02 02:33:09.257076 | orchestrator | 2026-02-02 02:33:09.257190 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-02-02 02:33:09.257207 | orchestrator | 2026-02-02 02:33:09.257220 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-02 02:33:09.257232 | orchestrator | Monday 02 February 2026 02:32:58 +0000 (0:00:00.152) 0:00:00.152 ******* 2026-02-02 02:33:09.257243 | orchestrator | ok: [testbed-manager] 2026-02-02 02:33:09.257255 | orchestrator | 2026-02-02 02:33:09.257266 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-02 02:33:09.257278 | orchestrator | Monday 02 February 2026 02:33:02 +0000 (0:00:04.011) 0:00:04.164 ******* 2026-02-02 02:33:09.257341 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:33:09.257355 | orchestrator | 2026-02-02 02:33:09.257367 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-02 02:33:09.257378 | orchestrator | Monday 02 February 2026 02:33:02 +0000 (0:00:00.061) 0:00:04.225 ******* 2026-02-02 02:33:09.257389 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-02-02 02:33:09.257400 | orchestrator | 2026-02-02 02:33:09.257411 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-02 02:33:09.257422 | orchestrator | Monday 02 February 2026 02:33:02 +0000 (0:00:00.088) 0:00:04.314 ******* 2026-02-02 02:33:09.257454 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-02-02 02:33:09.257466 | orchestrator | 2026-02-02 02:33:09.257477 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-02-02 02:33:09.257488 | orchestrator | Monday 02 February 2026 02:33:02 +0000 (0:00:00.091) 0:00:04.406 ******* 2026-02-02 02:33:09.257499 | orchestrator | ok: [testbed-manager] 2026-02-02 02:33:09.257522 | orchestrator | 2026-02-02 02:33:09.257533 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-02 02:33:09.257544 | orchestrator | Monday 02 February 2026 02:33:04 +0000 (0:00:01.197) 0:00:05.604 ******* 2026-02-02 02:33:09.257555 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:33:09.257566 | orchestrator | 2026-02-02 02:33:09.257577 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-02 02:33:09.257588 | orchestrator | Monday 02 February 2026 02:33:04 +0000 (0:00:00.067) 0:00:05.671 ******* 2026-02-02 02:33:09.257633 | orchestrator | ok: [testbed-manager] 2026-02-02 02:33:09.257668 | orchestrator | 2026-02-02 02:33:09.257688 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-02 02:33:09.257702 | orchestrator | Monday 02 February 2026 02:33:04 +0000 (0:00:00.555) 0:00:06.227 ******* 2026-02-02 02:33:09.257716 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:33:09.257729 | orchestrator | 2026-02-02 02:33:09.257743 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-02 02:33:09.257756 | orchestrator | Monday 02 February 2026 02:33:04 +0000 (0:00:00.079) 0:00:06.307 ******* 2026-02-02 02:33:09.257769 | orchestrator | changed: [testbed-manager] 2026-02-02 02:33:09.257782 | orchestrator | 2026-02-02 02:33:09.257795 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-02 02:33:09.257808 | orchestrator | Monday 02 February 2026 02:33:05 +0000 (0:00:00.562) 0:00:06.869 ******* 2026-02-02 02:33:09.257820 | orchestrator | changed: 
[testbed-manager] 2026-02-02 02:33:09.257833 | orchestrator | 2026-02-02 02:33:09.257845 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-02 02:33:09.257859 | orchestrator | Monday 02 February 2026 02:33:06 +0000 (0:00:01.237) 0:00:08.106 ******* 2026-02-02 02:33:09.257872 | orchestrator | ok: [testbed-manager] 2026-02-02 02:33:09.257885 | orchestrator | 2026-02-02 02:33:09.257898 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-02 02:33:09.257910 | orchestrator | Monday 02 February 2026 02:33:07 +0000 (0:00:01.054) 0:00:09.161 ******* 2026-02-02 02:33:09.257923 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-02-02 02:33:09.257941 | orchestrator | 2026-02-02 02:33:09.257959 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-02 02:33:09.257978 | orchestrator | Monday 02 February 2026 02:33:07 +0000 (0:00:00.087) 0:00:09.248 ******* 2026-02-02 02:33:09.257994 | orchestrator | changed: [testbed-manager] 2026-02-02 02:33:09.258012 | orchestrator | 2026-02-02 02:33:09.258119 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 02:33:09.258139 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-02 02:33:09.258158 | orchestrator | 2026-02-02 02:33:09.258177 | orchestrator | 2026-02-02 02:33:09.258195 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 02:33:09.258213 | orchestrator | Monday 02 February 2026 02:33:08 +0000 (0:00:01.223) 0:00:10.471 ******* 2026-02-02 02:33:09.258231 | orchestrator | =============================================================================== 2026-02-02 02:33:09.258250 | 
orchestrator | Gathering Facts --------------------------------------------------------- 4.01s 2026-02-02 02:33:09.258269 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.24s 2026-02-02 02:33:09.258314 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.22s 2026-02-02 02:33:09.258334 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.20s 2026-02-02 02:33:09.258352 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.05s 2026-02-02 02:33:09.258371 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.56s 2026-02-02 02:33:09.258415 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.56s 2026-02-02 02:33:09.258437 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2026-02-02 02:33:09.258455 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2026-02-02 02:33:09.258474 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2026-02-02 02:33:09.258493 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-02-02 02:33:09.258511 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2026-02-02 02:33:09.258547 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-02-02 02:33:09.609554 | orchestrator | + osism apply sshconfig 2026-02-02 02:33:21.722459 | orchestrator | 2026-02-02 02:33:21 | INFO  | Task 26f0812f-2ecf-4128-82cf-c225a5e5a345 (sshconfig) was prepared for execution. 
2026-02-02 02:33:21.722585 | orchestrator | 2026-02-02 02:33:21 | INFO  | It takes a moment until task 26f0812f-2ecf-4128-82cf-c225a5e5a345 (sshconfig) has been started and output is visible here. 2026-02-02 02:33:34.116245 | orchestrator | 2026-02-02 02:33:34.116424 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-02-02 02:33:34.116448 | orchestrator | 2026-02-02 02:33:34.116486 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-02-02 02:33:34.116502 | orchestrator | Monday 02 February 2026 02:33:26 +0000 (0:00:00.171) 0:00:00.171 ******* 2026-02-02 02:33:34.116511 | orchestrator | ok: [testbed-manager] 2026-02-02 02:33:34.116520 | orchestrator | 2026-02-02 02:33:34.116528 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-02-02 02:33:34.116536 | orchestrator | Monday 02 February 2026 02:33:26 +0000 (0:00:00.624) 0:00:00.795 ******* 2026-02-02 02:33:34.116544 | orchestrator | changed: [testbed-manager] 2026-02-02 02:33:34.116553 | orchestrator | 2026-02-02 02:33:34.116562 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-02-02 02:33:34.116575 | orchestrator | Monday 02 February 2026 02:33:27 +0000 (0:00:00.521) 0:00:01.317 ******* 2026-02-02 02:33:34.116587 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-02-02 02:33:34.116598 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-02-02 02:33:34.116611 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-02-02 02:33:34.116623 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-02-02 02:33:34.116635 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-02-02 02:33:34.116647 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-02-02 02:33:34.116660 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-02-02 02:33:34.116672 | orchestrator | 2026-02-02 02:33:34.116685 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-02-02 02:33:34.116698 | orchestrator | Monday 02 February 2026 02:33:33 +0000 (0:00:05.893) 0:00:07.210 ******* 2026-02-02 02:33:34.116711 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:33:34.116724 | orchestrator | 2026-02-02 02:33:34.116737 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-02-02 02:33:34.116750 | orchestrator | Monday 02 February 2026 02:33:33 +0000 (0:00:00.072) 0:00:07.283 ******* 2026-02-02 02:33:34.116765 | orchestrator | changed: [testbed-manager] 2026-02-02 02:33:34.116778 | orchestrator | 2026-02-02 02:33:34.116793 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 02:33:34.116807 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 02:33:34.116818 | orchestrator | 2026-02-02 02:33:34.116827 | orchestrator | 2026-02-02 02:33:34.116837 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 02:33:34.116846 | orchestrator | Monday 02 February 2026 02:33:33 +0000 (0:00:00.613) 0:00:07.897 ******* 2026-02-02 02:33:34.116855 | orchestrator | =============================================================================== 2026-02-02 02:33:34.116864 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.89s 2026-02-02 02:33:34.116874 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.62s 2026-02-02 02:33:34.116882 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.61s 2026-02-02 02:33:34.116891 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.52s 2026-02-02 02:33:34.116922 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2026-02-02 02:33:34.465935 | orchestrator | + osism apply known-hosts 2026-02-02 02:33:46.582850 | orchestrator | 2026-02-02 02:33:46 | INFO  | Task 112cb95f-e2a5-4796-85aa-765fc303a19e (known-hosts) was prepared for execution. 2026-02-02 02:33:46.582963 | orchestrator | 2026-02-02 02:33:46 | INFO  | It takes a moment until task 112cb95f-e2a5-4796-85aa-765fc303a19e (known-hosts) has been started and output is visible here. 2026-02-02 02:34:04.137460 | orchestrator | 2026-02-02 02:34:04.137541 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-02-02 02:34:04.137550 | orchestrator | 2026-02-02 02:34:04.137556 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-02-02 02:34:04.137563 | orchestrator | Monday 02 February 2026 02:33:51 +0000 (0:00:00.187) 0:00:00.187 ******* 2026-02-02 02:34:04.137569 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-02 02:34:04.137576 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-02 02:34:04.137581 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-02 02:34:04.137587 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-02 02:34:04.137592 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-02 02:34:04.137598 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-02 02:34:04.137604 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-02 02:34:04.137609 | orchestrator | 2026-02-02 02:34:04.137615 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-02-02 02:34:04.137621 | orchestrator | Monday 02 February 2026 02:33:57 +0000 (0:00:06.115) 0:00:06.302 ******* 2026-02-02 
02:34:04.137627 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-02 02:34:04.137635 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-02 02:34:04.137640 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-02 02:34:04.137646 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-02 02:34:04.137651 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-02 02:34:04.137662 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-02 02:34:04.137668 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-02 02:34:04.137674 | orchestrator | 2026-02-02 02:34:04.137679 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-02 02:34:04.137685 | orchestrator | Monday 02 February 2026 02:33:57 +0000 (0:00:00.175) 0:00:06.478 ******* 2026-02-02 02:34:04.137691 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBASmbSyja/QY99PBxnuUP9S1Q4fKcdX5INhiYrDjwV3mgqya1ncrTfM7LeTHRsZfE54K8YKIYPpJBv1CrrJw9fo=) 2026-02-02 02:34:04.137702 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnZ9M0em3zslDY7blgf48kZ04lzzvJel7FN6V0qR2qoJtxS17y4W8sv7a7FmVJBB3TSVaR98ei5fja6L32IWWoMFQExy7QFytSEzgiuV+mMX23maTdJe4XvyC94NVIs28xSeiBNTeLDnpje1OdaZy+08Snf2/cRoj4pF015UYlTK/QC5ncYaWsrMDIMFf6giA9Zp/rbcQsoJcQe6NMpT7zrdJH65zuzD7PuXl8jBFmNOaBhs6AUKarWDoYFmPmIbY8S7ht0aMH2bkkzCFfC87C+K6yDztkZDvjlGV8jdAF969IJV2CYubaqlcTT1/y3smXo3RdnOeLv2ikarPeSciQxj8tD1OuZUtoFyQqx7O/z3O/jSQvE276Aoz1WxYTfMcQRZFptaze8ZmCW1ez35t0/4XGFuM+dUTOlbvK3MzdyA2zYhuyePfFvWqJci5mcH8bRF6JCBspR757qXHNP9TBseruFo3j+xIzIwQK81dy+B0jJEUQ2vi86a49jHllaSk=) 2026-02-02 02:34:04.137728 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC7fm+yE9J+hOqf8beUtP94uKJhsf8Ze3QUAp65x63t8) 2026-02-02 02:34:04.137738 | orchestrator | 2026-02-02 02:34:04.137746 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-02 02:34:04.137755 | orchestrator | Monday 02 February 2026 02:33:58 +0000 (0:00:01.235) 0:00:07.714 ******* 2026-02-02 02:34:04.137764 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL3eZfZ3GDgRJPEJrq5OKK1F364ykrqPp1TgtAX7EoAm/630GrXIhG1vurHbq7gJyfXYzGNl7XzGuAEKEXqm4Iw=) 2026-02-02 02:34:04.137773 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHS54ycktksQdOKaK7uDnZdWKxXvpV5s27onAG7Mja15) 2026-02-02 02:34:04.137798 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDJR+ZrgU5rr4lLEwBOn4dPj5OYBzhcmZLbNWt63tR4vHZs0rjdNa1Vgph7GBonP+WzNFkuM7H1uJg7tjFViH4WZOL/xMkcWoHuh0K8rF7JhOQtqVWRp3jTclratrKYot/EWhqgJDir3HlhYyyq63YRa2njw5Zc+Bz5qAu1Z93o0li4ROotOoSzOkcW02C3Kxi0BjeFs2m8jtB1gIJSY5isR4qabLIx+7p7uZxHDfsbUW7phu7fJB8Rv8Ydlg/XOLz29S9Q1MvhCNZhwinXDe0wd7mWTNb/EQcfW/xxwFlT0qLoZd1Usvjl7xccJvSyTp1p/V/j4w7lLFeGTuUMEIbxgoUxetQBfV7V9sbaemZ0T9otneCWHgrBS3HYI21yQ6P5jwM+r+d2eh5d01pbgkdp8xhsuvOOAqPIKufJaTqKeUfZ6MdQfRKtf1qxOy+Q/FWOoKnKQK0ABJcnJvNV16VSHPV3AmuNY+vimaoFO+pNO3kh11SuC9VHF6ht25JdLAU=) 2026-02-02 02:34:04.137805 | orchestrator | 2026-02-02 02:34:04.137810 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-02 02:34:04.137816 | orchestrator | Monday 02 February 2026 02:33:59 +0000 (0:00:01.096) 0:00:08.810 ******* 2026-02-02 02:34:04.137821 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK+3FKf+gBj8HTeMBlQ7dhh6Hd59Per9HRgA6ruyzwYEzzI6Rk7eMhf9gBLY+Of1bpmlkAacKWmVvhG5LRhmi2g=) 2026-02-02 02:34:04.137827 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCNxZdmlZrPZcRMI2t7qZatknh6fN1QSi1nX2i2lECt62mWU9HgLA2SzFyhjnC2X+V5amjiBCozogI+pPHBSO2VTnu+RfWAzyMYdvuZHI+IzXvE2Vw3DtuKWhUz9D/OI8avylsWV477saGFQZo5KFy2R9/SZyNaLCcxGm70DveUC3LO7WqooRv/LiFMoPuZnU6jauXfiX1cps2W+2tnZ8rBCU8GYgKNZWKwHkWq3kCeccR7i32CZAcOTfbtlA394Pm0heisTlkhKASNgDfjASjiTWWDGWZsVbhJazqIukeI99+ESj26uwJILEsT4s28lRqxC5eUdV0iQ7TprnjdYczsvhXFlzLJLOungueFw1Sy0TYUTcPpdEdqPOVp9tmoh4jUPo8ZJ56z7mgpbpi51GLK/ZltVf3KqYJCziiXd4nIx+ajyLXSvYB23CelKeCDT0IweHF56pUWLluZ1Dt01TDhNPEep7qM0oWKTipP3A+s62fuObB/Y9kkV/MxgkZtous=) 2026-02-02 02:34:04.137833 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDRFpJmpXWgiX78LuAPXrAJq+AbNo+O4VBEHU046klzp) 2026-02-02 02:34:04.137839 | orchestrator | 2026-02-02 02:34:04.137844 | 
orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-02 02:34:04.137850 | orchestrator | Monday 02 February 2026 02:34:00 +0000 (0:00:01.131) 0:00:09.941 ******* 2026-02-02 02:34:04.137856 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4LjbaDpjlAPRocIWiSKRfY01N3ZcLfYFm6af5LWc+Ki5PR1vu+hSGHN3szYAdvuBxgGI+lkejIqnN+WmMOFtHO0mNX1jc9SVHPsaTEK3UvDL6rPDp8ip5x7gb1qpKB3YxEajtcLfRoOyPwGNbGJk3ChySmthbKwHJ+Xe/7Xt4ltw1x9IIbBG4cIzIYZ1ja74bRLQcpJhikf4B6n3f9mS+jGptFQ6t9Uk5ashNikDECoBAws1VKAyShvCSsIeOeyCfUm+n/3yEtNMcS/dLiqyyzEC8IxM2vSJoPHtYpNTNrUR0hjGPsJwZLPS4RwRK+hdyisFeAMhcR7CHL0hdx1t+P7AwAVt8M6Ku1drqHyH9McIhgu0/vVAl6CPK8wciQZLQ89Cuc30GLy2Dfz04f6uigtvrC8gscMXcOOKYm7pPrFxrya8MoF8ZZx1GW0fIszc07J/tYhmgw9z7jp1Zabb/n5AHKHrPaPLwxAjD+1EXUsrREJUzHKJPw7j895I15mc=) 2026-02-02 02:34:04.137866 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFVmmZXEcvI+s/NXLJAermcLccEdXtebougwm33/GS+7) 2026-02-02 02:34:04.137872 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO9mIqjPPjaqBaXMyETmuFgH67qgpwb8AqPJnqK1jTSbUWcGto0Ej+bVNipwU35GlKIIqsKZdp+nsLL/P+a5L+k=) 2026-02-02 02:34:04.137877 | orchestrator | 2026-02-02 02:34:04.137883 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-02 02:34:04.137888 | orchestrator | Monday 02 February 2026 02:34:01 +0000 (0:00:01.112) 0:00:11.054 ******* 2026-02-02 02:34:04.137936 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC3VHmCbhKvMFhYgOWbVrWYnrjn6uRy3TN8Y1oGCRh53) 2026-02-02 02:34:04.137942 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCwNmfB8PnmxeDY0l96nXL3AIPgQmCfzisTSWaviGsvhzTZmy8FATAEi3NIVEBsA+l4gOEVE5Q14IHbzJZlSAwMFCHdanhO68Fey8mw0L7YSW8Pce3uhC0DJD3fa5pgyG8NEkblDu9MHNj1D4MJjGxRbdrC+b3UUvIkesDoNEl8ZooiSRC4AidPuk1+FYa+nqW200G1rRtmW++jlLs44+wcViTZuHxvSVa8cMv5YnlkktDnhUQV358HTqpM3l2i5FqN6NQaTZ3q6Z60fwlh5/cpk+0qSpNQ7wquhCniNKozJbWPsWu2tYXT4BsN77zB7LuCHY7mRuPC05MoDJHRM5TpLLp+jVBvZWy713um+2kD1RlYpY5YWSy+ZqJNjyIWlYHysFC5teTxsuV77AEC7g8JUPPTRu1E+G5U+GcPys5N3apKI8p2Gd0Bxefre4mvNqt5vfzlkZ798Cd4YL8VQivZ/xCgm6RrZa9CS1Zi/oo7gEMWIpD4HIdkB3KChhFLYBk=) 2026-02-02 02:34:04.137948 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC1Uxy4qxlc9DYx+W66HF2RtDmGf3PA42MOfZRVY8iVSrJSu4RPHvijBLhx3X0ChKNjF2Chsa7hE/pI9xGvHvDk=) 2026-02-02 02:34:04.137954 | orchestrator | 2026-02-02 02:34:04.137959 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-02 02:34:04.137965 | orchestrator | Monday 02 February 2026 02:34:03 +0000 (0:00:01.116) 0:00:12.170 ******* 2026-02-02 02:34:04.137975 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCN9LGRUeCdQUWhVeb0Q22NWOWr0+10iZZUL82hJ3JyifiromybjFHQ3avaIOnO1ZbDszIlRI2oC3u5uawPm950UCT4oeWF6jpbRtoD6+b6Tb2Osp+PfuAw7BsxHp1TVVHGJdQ1visTjnqqKHSN6LZJd12S975xkaTgNlYehKL6EKABSPzRLV9aIm8j2I3xWm3c/qJJbzUvqbIGXzXFpbt7sqM6HFAHAWTrSoFYmNl9aRfBBuZMznOXn0GIGDiAqtEgUg6urI5YmtcfBvxVojs+FZDwHFE+RD2j4NzI2sjvEHdUGUdZTqGrWajALHTMyCOVWzzKUa8T8o89yF83VW0kLC+UKt8QfCZMFvp7oAdkhmNJQY4mc3hLvHOUaCJ8H+On6YAQmuNwY7jQIBnXSbKOzRMJvAsbWeVT/4QfDP4UxhyA8h4fsQ1BEfpbhmQbidYjc4dJemlkruVvRhd/hXWXf6kzaqFvCHHc8bM1NJCL0Q/Cp6njn0f2Mm/AtYgoss8=) 2026-02-02 02:34:15.654473 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGVxqWKiELi8hPwVl5U9Jmd7RvQuJptXgpXcejzWJTkOlwySsPK3WAGMB12PO829hf7W8L6tfo5T6iGwBO3iqs4=) 
2026-02-02 02:34:15.654606 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILY9LNosPKJgPGTQZBRupvXTOz/BeRqoX0zeBVu7ZZA8) 2026-02-02 02:34:15.654634 | orchestrator | 2026-02-02 02:34:15.654654 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-02 02:34:15.654673 | orchestrator | Monday 02 February 2026 02:34:04 +0000 (0:00:01.124) 0:00:13.295 ******* 2026-02-02 02:34:15.654687 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE7QSXDgNHmg8QZnY9QcBwrRnqP0gHke56yUs6xQXXlPfAtIDTABrJKX110gHr4nTY2B4j+sGrGK3VoDknXL+wg=) 2026-02-02 02:34:15.654700 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCTJtj64Nb5A+f8xWH6A5pwYK/x5RH5tqscVQuoTC4QygH3lJ8AAX2Uibujptf78FFuIt5HuyI2fFGPsfIx6emNP9W8K7XdIxEPuwwOWSI7pgfyyvWURLYUQYE0er3HRZD92U86dE8jJtk2ZrUzecShZvqO1xWscDLDPNxzz/lmVWtZiwN+TOkqxUq9q9R0/yJc6i8h+URfjJ51LtKkWikVTLw/H42GI0zHaituXGTvZgcfa0F4FrIN3zk7ae9QZdicgD0Y3nbtOnsFVFSG5ohP0AGBw5Jh2DSrihBmxTUfBPVtFE76lhXtIJLa8MuCT0aLSf8oS+atdGOsx1AoijNQvSzNQPUb2VcSqEqlxWvHtwKjohHkFX1XsaxqFU98Ht4UhOY2wn+v28fT0oJNJkZaZpe3yG+OFDScVOQs04sLUw+XVkasILmmv93WtllslLj6mfIY5fT/P+rkpSSOdHNItKM0zXQEZYXz/JYowd2PPyTPzZm4ZT1DKHuuvlnoVlU=) 2026-02-02 02:34:15.654737 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHFL5m42Z/B+78y3+G4OOduA8722ZxtC3OqWGs68EwIR) 2026-02-02 02:34:15.654748 | orchestrator | 2026-02-02 02:34:15.654758 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-02-02 02:34:15.654768 | orchestrator | Monday 02 February 2026 02:34:05 +0000 (0:00:01.127) 0:00:14.423 ******* 2026-02-02 02:34:15.654778 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-02 02:34:15.654788 | orchestrator | ok: [testbed-manager] => 
(item=testbed-node-3) 2026-02-02 02:34:15.654797 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-02 02:34:15.654807 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-02 02:34:15.654816 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-02 02:34:15.654826 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-02 02:34:15.654835 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-02 02:34:15.654845 | orchestrator | 2026-02-02 02:34:15.654855 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-02-02 02:34:15.654865 | orchestrator | Monday 02 February 2026 02:34:10 +0000 (0:00:05.596) 0:00:20.020 ******* 2026-02-02 02:34:15.654876 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-02 02:34:15.654888 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-02 02:34:15.654898 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-02 02:34:15.654907 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-02 02:34:15.654917 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-02 02:34:15.654926 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-02 02:34:15.654938 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-02 02:34:15.654949 | orchestrator | 2026-02-02 02:34:15.654960 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-02 02:34:15.654971 | orchestrator | Monday 02 February 2026 02:34:11 +0000 (0:00:00.203) 0:00:20.223 ******* 2026-02-02 02:34:15.654982 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBASmbSyja/QY99PBxnuUP9S1Q4fKcdX5INhiYrDjwV3mgqya1ncrTfM7LeTHRsZfE54K8YKIYPpJBv1CrrJw9fo=) 2026-02-02 02:34:15.655030 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnZ9M0em3zslDY7blgf48kZ04lzzvJel7FN6V0qR2qoJtxS17y4W8sv7a7FmVJBB3TSVaR98ei5fja6L32IWWoMFQExy7QFytSEzgiuV+mMX23maTdJe4XvyC94NVIs28xSeiBNTeLDnpje1OdaZy+08Snf2/cRoj4pF015UYlTK/QC5ncYaWsrMDIMFf6giA9Zp/rbcQsoJcQe6NMpT7zrdJH65zuzD7PuXl8jBFmNOaBhs6AUKarWDoYFmPmIbY8S7ht0aMH2bkkzCFfC87C+K6yDztkZDvjlGV8jdAF969IJV2CYubaqlcTT1/y3smXo3RdnOeLv2ikarPeSciQxj8tD1OuZUtoFyQqx7O/z3O/jSQvE276Aoz1WxYTfMcQRZFptaze8ZmCW1ez35t0/4XGFuM+dUTOlbvK3MzdyA2zYhuyePfFvWqJci5mcH8bRF6JCBspR757qXHNP9TBseruFo3j+xIzIwQK81dy+B0jJEUQ2vi86a49jHllaSk=) 2026-02-02 02:34:15.655052 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC7fm+yE9J+hOqf8beUtP94uKJhsf8Ze3QUAp65x63t8) 2026-02-02 02:34:15.655064 | orchestrator | 2026-02-02 02:34:15.655076 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-02 02:34:15.655088 | orchestrator | Monday 02 February 2026 
02:34:12 +0000 (0:00:01.142) 0:00:21.366 ******* 2026-02-02 02:34:15.655103 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDJR+ZrgU5rr4lLEwBOn4dPj5OYBzhcmZLbNWt63tR4vHZs0rjdNa1Vgph7GBonP+WzNFkuM7H1uJg7tjFViH4WZOL/xMkcWoHuh0K8rF7JhOQtqVWRp3jTclratrKYot/EWhqgJDir3HlhYyyq63YRa2njw5Zc+Bz5qAu1Z93o0li4ROotOoSzOkcW02C3Kxi0BjeFs2m8jtB1gIJSY5isR4qabLIx+7p7uZxHDfsbUW7phu7fJB8Rv8Ydlg/XOLz29S9Q1MvhCNZhwinXDe0wd7mWTNb/EQcfW/xxwFlT0qLoZd1Usvjl7xccJvSyTp1p/V/j4w7lLFeGTuUMEIbxgoUxetQBfV7V9sbaemZ0T9otneCWHgrBS3HYI21yQ6P5jwM+r+d2eh5d01pbgkdp8xhsuvOOAqPIKufJaTqKeUfZ6MdQfRKtf1qxOy+Q/FWOoKnKQK0ABJcnJvNV16VSHPV3AmuNY+vimaoFO+pNO3kh11SuC9VHF6ht25JdLAU=) 2026-02-02 02:34:15.655113 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHS54ycktksQdOKaK7uDnZdWKxXvpV5s27onAG7Mja15) 2026-02-02 02:34:15.655123 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL3eZfZ3GDgRJPEJrq5OKK1F364ykrqPp1TgtAX7EoAm/630GrXIhG1vurHbq7gJyfXYzGNl7XzGuAEKEXqm4Iw=) 2026-02-02 02:34:15.655132 | orchestrator | 2026-02-02 02:34:15.655142 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-02 02:34:15.655152 | orchestrator | Monday 02 February 2026 02:34:13 +0000 (0:00:01.123) 0:00:22.490 ******* 2026-02-02 02:34:15.655161 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK+3FKf+gBj8HTeMBlQ7dhh6Hd59Per9HRgA6ruyzwYEzzI6Rk7eMhf9gBLY+Of1bpmlkAacKWmVvhG5LRhmi2g=) 2026-02-02 02:34:15.655171 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCNxZdmlZrPZcRMI2t7qZatknh6fN1QSi1nX2i2lECt62mWU9HgLA2SzFyhjnC2X+V5amjiBCozogI+pPHBSO2VTnu+RfWAzyMYdvuZHI+IzXvE2Vw3DtuKWhUz9D/OI8avylsWV477saGFQZo5KFy2R9/SZyNaLCcxGm70DveUC3LO7WqooRv/LiFMoPuZnU6jauXfiX1cps2W+2tnZ8rBCU8GYgKNZWKwHkWq3kCeccR7i32CZAcOTfbtlA394Pm0heisTlkhKASNgDfjASjiTWWDGWZsVbhJazqIukeI99+ESj26uwJILEsT4s28lRqxC5eUdV0iQ7TprnjdYczsvhXFlzLJLOungueFw1Sy0TYUTcPpdEdqPOVp9tmoh4jUPo8ZJ56z7mgpbpi51GLK/ZltVf3KqYJCziiXd4nIx+ajyLXSvYB23CelKeCDT0IweHF56pUWLluZ1Dt01TDhNPEep7qM0oWKTipP3A+s62fuObB/Y9kkV/MxgkZtous=) 2026-02-02 02:34:15.655181 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDRFpJmpXWgiX78LuAPXrAJq+AbNo+O4VBEHU046klzp) 2026-02-02 02:34:15.655191 | orchestrator | 2026-02-02 02:34:15.655201 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-02 02:34:15.655210 | orchestrator | Monday 02 February 2026 02:34:14 +0000 (0:00:01.159) 0:00:23.649 ******* 2026-02-02 02:34:15.655220 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO9mIqjPPjaqBaXMyETmuFgH67qgpwb8AqPJnqK1jTSbUWcGto0Ej+bVNipwU35GlKIIqsKZdp+nsLL/P+a5L+k=) 2026-02-02 02:34:15.655230 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4LjbaDpjlAPRocIWiSKRfY01N3ZcLfYFm6af5LWc+Ki5PR1vu+hSGHN3szYAdvuBxgGI+lkejIqnN+WmMOFtHO0mNX1jc9SVHPsaTEK3UvDL6rPDp8ip5x7gb1qpKB3YxEajtcLfRoOyPwGNbGJk3ChySmthbKwHJ+Xe/7Xt4ltw1x9IIbBG4cIzIYZ1ja74bRLQcpJhikf4B6n3f9mS+jGptFQ6t9Uk5ashNikDECoBAws1VKAyShvCSsIeOeyCfUm+n/3yEtNMcS/dLiqyyzEC8IxM2vSJoPHtYpNTNrUR0hjGPsJwZLPS4RwRK+hdyisFeAMhcR7CHL0hdx1t+P7AwAVt8M6Ku1drqHyH9McIhgu0/vVAl6CPK8wciQZLQ89Cuc30GLy2Dfz04f6uigtvrC8gscMXcOOKYm7pPrFxrya8MoF8ZZx1GW0fIszc07J/tYhmgw9z7jp1Zabb/n5AHKHrPaPLwxAjD+1EXUsrREJUzHKJPw7j895I15mc=) 2026-02-02 02:34:15.655253 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFVmmZXEcvI+s/NXLJAermcLccEdXtebougwm33/GS+7) 2026-02-02 02:34:20.293983 | orchestrator | 2026-02-02 02:34:20.294149 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-02 02:34:20.294167 | orchestrator | Monday 02 February 2026 02:34:15 +0000 (0:00:01.161) 0:00:24.811 ******* 2026-02-02 02:34:20.294180 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC1Uxy4qxlc9DYx+W66HF2RtDmGf3PA42MOfZRVY8iVSrJSu4RPHvijBLhx3X0ChKNjF2Chsa7hE/pI9xGvHvDk=) 2026-02-02 02:34:20.294194 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC3VHmCbhKvMFhYgOWbVrWYnrjn6uRy3TN8Y1oGCRh53) 2026-02-02 02:34:20.294210 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwNmfB8PnmxeDY0l96nXL3AIPgQmCfzisTSWaviGsvhzTZmy8FATAEi3NIVEBsA+l4gOEVE5Q14IHbzJZlSAwMFCHdanhO68Fey8mw0L7YSW8Pce3uhC0DJD3fa5pgyG8NEkblDu9MHNj1D4MJjGxRbdrC+b3UUvIkesDoNEl8ZooiSRC4AidPuk1+FYa+nqW200G1rRtmW++jlLs44+wcViTZuHxvSVa8cMv5YnlkktDnhUQV358HTqpM3l2i5FqN6NQaTZ3q6Z60fwlh5/cpk+0qSpNQ7wquhCniNKozJbWPsWu2tYXT4BsN77zB7LuCHY7mRuPC05MoDJHRM5TpLLp+jVBvZWy713um+2kD1RlYpY5YWSy+ZqJNjyIWlYHysFC5teTxsuV77AEC7g8JUPPTRu1E+G5U+GcPys5N3apKI8p2Gd0Bxefre4mvNqt5vfzlkZ798Cd4YL8VQivZ/xCgm6RrZa9CS1Zi/oo7gEMWIpD4HIdkB3KChhFLYBk=) 2026-02-02 02:34:20.294224 | orchestrator | 2026-02-02 02:34:20.294235 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-02 02:34:20.294247 | orchestrator | Monday 02 February 2026 02:34:16 +0000 (0:00:01.151) 0:00:25.962 ******* 2026-02-02 02:34:20.294258 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGVxqWKiELi8hPwVl5U9Jmd7RvQuJptXgpXcejzWJTkOlwySsPK3WAGMB12PO829hf7W8L6tfo5T6iGwBO3iqs4=) 2026-02-02 02:34:20.294270 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCN9LGRUeCdQUWhVeb0Q22NWOWr0+10iZZUL82hJ3JyifiromybjFHQ3avaIOnO1ZbDszIlRI2oC3u5uawPm950UCT4oeWF6jpbRtoD6+b6Tb2Osp+PfuAw7BsxHp1TVVHGJdQ1visTjnqqKHSN6LZJd12S975xkaTgNlYehKL6EKABSPzRLV9aIm8j2I3xWm3c/qJJbzUvqbIGXzXFpbt7sqM6HFAHAWTrSoFYmNl9aRfBBuZMznOXn0GIGDiAqtEgUg6urI5YmtcfBvxVojs+FZDwHFE+RD2j4NzI2sjvEHdUGUdZTqGrWajALHTMyCOVWzzKUa8T8o89yF83VW0kLC+UKt8QfCZMFvp7oAdkhmNJQY4mc3hLvHOUaCJ8H+On6YAQmuNwY7jQIBnXSbKOzRMJvAsbWeVT/4QfDP4UxhyA8h4fsQ1BEfpbhmQbidYjc4dJemlkruVvRhd/hXWXf6kzaqFvCHHc8bM1NJCL0Q/Cp6njn0f2Mm/AtYgoss8=) 2026-02-02 02:34:20.294281 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILY9LNosPKJgPGTQZBRupvXTOz/BeRqoX0zeBVu7ZZA8) 2026-02-02 02:34:20.294292 | orchestrator | 2026-02-02 02:34:20.294303 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-02 02:34:20.294317 | orchestrator | Monday 02 February 2026 02:34:17 +0000 (0:00:01.093) 0:00:27.056 ******* 2026-02-02 02:34:20.294447 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCTJtj64Nb5A+f8xWH6A5pwYK/x5RH5tqscVQuoTC4QygH3lJ8AAX2Uibujptf78FFuIt5HuyI2fFGPsfIx6emNP9W8K7XdIxEPuwwOWSI7pgfyyvWURLYUQYE0er3HRZD92U86dE8jJtk2ZrUzecShZvqO1xWscDLDPNxzz/lmVWtZiwN+TOkqxUq9q9R0/yJc6i8h+URfjJ51LtKkWikVTLw/H42GI0zHaituXGTvZgcfa0F4FrIN3zk7ae9QZdicgD0Y3nbtOnsFVFSG5ohP0AGBw5Jh2DSrihBmxTUfBPVtFE76lhXtIJLa8MuCT0aLSf8oS+atdGOsx1AoijNQvSzNQPUb2VcSqEqlxWvHtwKjohHkFX1XsaxqFU98Ht4UhOY2wn+v28fT0oJNJkZaZpe3yG+OFDScVOQs04sLUw+XVkasILmmv93WtllslLj6mfIY5fT/P+rkpSSOdHNItKM0zXQEZYXz/JYowd2PPyTPzZm4ZT1DKHuuvlnoVlU=) 2026-02-02 02:34:20.294472 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE7QSXDgNHmg8QZnY9QcBwrRnqP0gHke56yUs6xQXXlPfAtIDTABrJKX110gHr4nTY2B4j+sGrGK3VoDknXL+wg=) 2026-02-02 02:34:20.294491 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHFL5m42Z/B+78y3+G4OOduA8722ZxtC3OqWGs68EwIR) 2026-02-02 02:34:20.294511 | orchestrator | 2026-02-02 02:34:20.294530 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-02-02 02:34:20.294582 | orchestrator | Monday 02 February 2026 02:34:19 +0000 (0:00:01.129) 0:00:28.186 ******* 2026-02-02 02:34:20.294598 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-02 02:34:20.294611 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-02-02 02:34:20.294623 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-02 02:34:20.294636 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-02 02:34:20.294649 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-02 02:34:20.294662 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-02 02:34:20.294675 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-02 02:34:20.294688 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:34:20.294701 | orchestrator | 2026-02-02 02:34:20.294735 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-02-02 02:34:20.294747 | orchestrator | Monday 02 February 2026 02:34:19 +0000 (0:00:00.180) 0:00:28.366 ******* 2026-02-02 02:34:20.294758 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:34:20.294773 | orchestrator | 2026-02-02 02:34:20.294792 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-02-02 02:34:20.294810 | orchestrator | Monday 02 February 2026 02:34:19 +0000 
(0:00:00.060) 0:00:28.427 ******* 2026-02-02 02:34:20.294827 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:34:20.294845 | orchestrator | 2026-02-02 02:34:20.294864 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-02-02 02:34:20.294884 | orchestrator | Monday 02 February 2026 02:34:19 +0000 (0:00:00.050) 0:00:28.478 ******* 2026-02-02 02:34:20.294903 | orchestrator | changed: [testbed-manager] 2026-02-02 02:34:20.294921 | orchestrator | 2026-02-02 02:34:20.294937 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 02:34:20.294949 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-02 02:34:20.294961 | orchestrator | 2026-02-02 02:34:20.294972 | orchestrator | 2026-02-02 02:34:20.294983 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 02:34:20.294993 | orchestrator | Monday 02 February 2026 02:34:20 +0000 (0:00:00.741) 0:00:29.220 ******* 2026-02-02 02:34:20.295012 | orchestrator | =============================================================================== 2026-02-02 02:34:20.295023 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.12s 2026-02-02 02:34:20.295034 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.60s 2026-02-02 02:34:20.295045 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.24s 2026-02-02 02:34:20.295056 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-02-02 02:34:20.295066 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-02-02 02:34:20.295077 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-02-02 02:34:20.295087 
| orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-02-02 02:34:20.295098 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-02-02 02:34:20.295109 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-02-02 02:34:20.295119 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-02-02 02:34:20.295130 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-02-02 02:34:20.295140 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-02-02 02:34:20.295151 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-02-02 02:34:20.295162 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-02-02 02:34:20.295181 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-02-02 02:34:20.295192 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-02-02 02:34:20.295202 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.74s 2026-02-02 02:34:20.295213 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.20s 2026-02-02 02:34:20.295224 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2026-02-02 02:34:20.295235 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2026-02-02 02:34:20.615986 | orchestrator | + osism apply squid 2026-02-02 02:34:32.768454 | orchestrator | 2026-02-02 02:34:32 | INFO  | Task eb15e018-7296-43a1-97c8-0f8d8eb6ad5f (squid) was prepared for execution. 
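The known_hosts play above scans each host (`Run ssh-keyscan for all hosts with hostname` / `with ansible_host`) and then writes the scanned entries, reporting `changed` only when a line is actually added. A minimal offline sketch of that append-if-absent pattern is below; the real role feeds it output from `ssh-keyscan -t rsa,ecdsa,ed25519 <host>`, while the entries, key material, and the `add_entry` helper here are hard-coded illustrations, not part of the role.

```shell
#!/usr/bin/env bash
# Sketch of the idempotent known_hosts write pattern seen in the play:
# an entry is appended only if an identical line is not already present,
# mirroring the role's changed/ok behavior. Sample entries are fabricated.
set -e

KNOWN_HOSTS=$(mktemp)

add_entry() {
    # append a "host keytype key" line unless an identical line exists
    grep -qxF "$1" "$KNOWN_HOSTS" || echo "$1" >> "$KNOWN_HOSTS"
}

add_entry "testbed-node-0 ssh-ed25519 AAAAC3...sample"
add_entry "testbed-node-0 ssh-ed25519 AAAAC3...sample"   # duplicate, skipped
add_entry "192.168.16.10 ssh-ed25519 AAAAC3...sample"

COUNT=$(wc -l < "$KNOWN_HOSTS" | tr -d ' ')
rm -f "$KNOWN_HOSTS"
```

In the real run the same entries are written twice, once keyed by hostname and once by `ansible_host` IP, which is why each node appears in both halves of the play.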
2026-02-02 02:34:32.768564 | orchestrator | 2026-02-02 02:34:32 | INFO  | It takes a moment until task eb15e018-7296-43a1-97c8-0f8d8eb6ad5f (squid) has been started and output is visible here. 2026-02-02 02:36:49.433625 | orchestrator | 2026-02-02 02:36:49.433733 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-02-02 02:36:49.433750 | orchestrator | 2026-02-02 02:36:49.433762 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-02-02 02:36:49.433774 | orchestrator | Monday 02 February 2026 02:34:37 +0000 (0:00:00.203) 0:00:00.203 ******* 2026-02-02 02:36:49.433785 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-02-02 02:36:49.433798 | orchestrator | 2026-02-02 02:36:49.433809 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-02-02 02:36:49.433819 | orchestrator | Monday 02 February 2026 02:34:37 +0000 (0:00:00.093) 0:00:00.296 ******* 2026-02-02 02:36:49.433830 | orchestrator | ok: [testbed-manager] 2026-02-02 02:36:49.433842 | orchestrator | 2026-02-02 02:36:49.433853 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-02-02 02:36:49.433864 | orchestrator | Monday 02 February 2026 02:34:39 +0000 (0:00:01.658) 0:00:01.955 ******* 2026-02-02 02:36:49.433876 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-02-02 02:36:49.433886 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-02-02 02:36:49.433897 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-02-02 02:36:49.433908 | orchestrator | 2026-02-02 02:36:49.433919 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-02-02 02:36:49.433930 | orchestrator | Monday 02 
February 2026 02:34:40 +0000 (0:00:01.169) 0:00:03.124 ******* 2026-02-02 02:36:49.433941 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-02-02 02:36:49.433951 | orchestrator | 2026-02-02 02:36:49.433962 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-02-02 02:36:49.433973 | orchestrator | Monday 02 February 2026 02:34:41 +0000 (0:00:01.121) 0:00:04.245 ******* 2026-02-02 02:36:49.433984 | orchestrator | ok: [testbed-manager] 2026-02-02 02:36:49.433994 | orchestrator | 2026-02-02 02:36:49.434005 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-02-02 02:36:49.434078 | orchestrator | Monday 02 February 2026 02:34:41 +0000 (0:00:00.367) 0:00:04.613 ******* 2026-02-02 02:36:49.434093 | orchestrator | changed: [testbed-manager] 2026-02-02 02:36:49.434105 | orchestrator | 2026-02-02 02:36:49.434116 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-02-02 02:36:49.434127 | orchestrator | Monday 02 February 2026 02:34:42 +0000 (0:00:00.936) 0:00:05.550 ******* 2026-02-02 02:36:49.434137 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-02-02 02:36:49.434153 | orchestrator | ok: [testbed-manager] 2026-02-02 02:36:49.434165 | orchestrator | 2026-02-02 02:36:49.434179 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-02-02 02:36:49.434220 | orchestrator | Monday 02 February 2026 02:35:32 +0000 (0:00:49.953) 0:00:55.503 ******* 2026-02-02 02:36:49.434234 | orchestrator | changed: [testbed-manager] 2026-02-02 02:36:49.434246 | orchestrator | 2026-02-02 02:36:49.434260 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-02-02 02:36:49.434273 | orchestrator | Monday 02 February 2026 02:35:48 +0000 (0:00:15.777) 0:01:11.281 ******* 2026-02-02 02:36:49.434285 | orchestrator | Pausing for 60 seconds 2026-02-02 02:36:49.434298 | orchestrator | changed: [testbed-manager] 2026-02-02 02:36:49.434311 | orchestrator | 2026-02-02 02:36:49.434324 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-02-02 02:36:49.434336 | orchestrator | Monday 02 February 2026 02:36:48 +0000 (0:01:00.090) 0:02:11.371 ******* 2026-02-02 02:36:49.434350 | orchestrator | ok: [testbed-manager] 2026-02-02 02:36:49.434363 | orchestrator | 2026-02-02 02:36:49.434376 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-02-02 02:36:49.434388 | orchestrator | Monday 02 February 2026 02:36:48 +0000 (0:00:00.083) 0:02:11.455 ******* 2026-02-02 02:36:49.434499 | orchestrator | changed: [testbed-manager] 2026-02-02 02:36:49.434523 | orchestrator | 2026-02-02 02:36:49.434535 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 02:36:49.434546 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 02:36:49.434557 | orchestrator | 2026-02-02 02:36:49.434568 | orchestrator | 2026-02-02 02:36:49.434579 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-02-02 02:36:49.434590 | orchestrator | Monday 02 February 2026 02:36:49 +0000 (0:00:00.636) 0:02:12.092 ******* 2026-02-02 02:36:49.434608 | orchestrator | =============================================================================== 2026-02-02 02:36:49.434655 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2026-02-02 02:36:49.434677 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 49.95s 2026-02-02 02:36:49.434696 | orchestrator | osism.services.squid : Restart squid service --------------------------- 15.78s 2026-02-02 02:36:49.434714 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.66s 2026-02-02 02:36:49.434732 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.17s 2026-02-02 02:36:49.434749 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.12s 2026-02-02 02:36:49.434765 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.94s 2026-02-02 02:36:49.434782 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.64s 2026-02-02 02:36:49.434800 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2026-02-02 02:36:49.434818 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2026-02-02 02:36:49.434838 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2026-02-02 02:36:49.787614 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-02 02:36:49.787703 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-02 02:36:49.836550 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-02 02:36:49.836651 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh 
kolla/release 2026-02-02 02:36:49.841742 | orchestrator | + set -e 2026-02-02 02:36:49.841802 | orchestrator | + NAMESPACE=kolla/release 2026-02-02 02:36:49.841818 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-02 02:36:49.848154 | orchestrator | ++ semver 9.5.0 9.0.0 2026-02-02 02:36:49.916742 | orchestrator | + [[ 1 -lt 0 ]] 2026-02-02 02:36:49.917354 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-02-02 02:37:02.086323 | orchestrator | 2026-02-02 02:37:02 | INFO  | Task ebc7bf55-41db-4a07-8dd7-5d438db8edfe (operator) was prepared for execution. 2026-02-02 02:37:02.086514 | orchestrator | 2026-02-02 02:37:02 | INFO  | It takes a moment until task ebc7bf55-41db-4a07-8dd7-5d438db8edfe (operator) has been started and output is visible here. 2026-02-02 02:37:17.946657 | orchestrator | 2026-02-02 02:37:17.946785 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-02-02 02:37:17.946804 | orchestrator | 2026-02-02 02:37:17.946816 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-02 02:37:17.946828 | orchestrator | Monday 02 February 2026 02:37:06 +0000 (0:00:00.149) 0:00:00.149 ******* 2026-02-02 02:37:17.946839 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:37:17.946850 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:37:17.946861 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:37:17.946871 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:37:17.946882 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:37:17.946892 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:37:17.946903 | orchestrator | 2026-02-02 02:37:17.946914 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-02-02 02:37:17.946924 | orchestrator | Monday 02 February 2026 02:37:09 +0000 (0:00:03.224) 0:00:03.373 
******* 2026-02-02 02:37:17.946935 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:37:17.946945 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:37:17.946956 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:37:17.946966 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:37:17.946977 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:37:17.946987 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:37:17.946998 | orchestrator | 2026-02-02 02:37:17.947009 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-02-02 02:37:17.947019 | orchestrator | 2026-02-02 02:37:17.947030 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-02 02:37:17.947041 | orchestrator | Monday 02 February 2026 02:37:10 +0000 (0:00:00.752) 0:00:04.126 ******* 2026-02-02 02:37:17.947052 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:37:17.947062 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:37:17.947073 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:37:17.947083 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:37:17.947094 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:37:17.947105 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:37:17.947116 | orchestrator | 2026-02-02 02:37:17.947127 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-02 02:37:17.947155 | orchestrator | Monday 02 February 2026 02:37:10 +0000 (0:00:00.182) 0:00:04.309 ******* 2026-02-02 02:37:17.947167 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:37:17.947178 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:37:17.947190 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:37:17.947202 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:37:17.947219 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:37:17.947237 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:37:17.947254 | orchestrator | 2026-02-02 02:37:17.947272 | 
orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-02 02:37:17.947292 | orchestrator | Monday 02 February 2026 02:37:10 +0000 (0:00:00.187) 0:00:04.496 ******* 2026-02-02 02:37:17.947311 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:37:17.947332 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:37:17.947351 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:37:17.947374 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:37:17.947395 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:37:17.947489 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:37:17.947517 | orchestrator | 2026-02-02 02:37:17.947540 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-02 02:37:17.947559 | orchestrator | Monday 02 February 2026 02:37:11 +0000 (0:00:00.642) 0:00:05.139 ******* 2026-02-02 02:37:17.947577 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:37:17.947595 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:37:17.947614 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:37:17.947632 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:37:17.947650 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:37:17.947667 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:37:17.947718 | orchestrator | 2026-02-02 02:37:17.947738 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-02 02:37:17.947756 | orchestrator | Monday 02 February 2026 02:37:12 +0000 (0:00:00.767) 0:00:05.906 ******* 2026-02-02 02:37:17.947773 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-02-02 02:37:17.947790 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-02-02 02:37:17.947808 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-02-02 02:37:17.947826 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-02-02 02:37:17.947846 | 
orchestrator | changed: [testbed-node-5] => (item=adm) 2026-02-02 02:37:17.947864 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-02-02 02:37:17.947882 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-02-02 02:37:17.947901 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-02-02 02:37:17.947920 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-02-02 02:37:17.947939 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-02-02 02:37:17.947957 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-02-02 02:37:17.947974 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-02-02 02:37:17.947993 | orchestrator | 2026-02-02 02:37:17.948010 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-02 02:37:17.948028 | orchestrator | Monday 02 February 2026 02:37:13 +0000 (0:00:01.172) 0:00:07.079 ******* 2026-02-02 02:37:17.948047 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:37:17.948065 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:37:17.948083 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:37:17.948102 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:37:17.948120 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:37:17.948136 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:37:17.948147 | orchestrator | 2026-02-02 02:37:17.948158 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-02 02:37:17.948170 | orchestrator | Monday 02 February 2026 02:37:14 +0000 (0:00:01.239) 0:00:08.318 ******* 2026-02-02 02:37:17.948181 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-02-02 02:37:17.948192 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-02-02 02:37:17.948202 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-02-02 02:37:17.948213 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-02-02 02:37:17.948247 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-02-02 02:37:17.948258 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-02-02 02:37:17.948269 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-02-02 02:37:17.948279 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-02-02 02:37:17.948290 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-02-02 02:37:17.948300 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-02-02 02:37:17.948311 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-02-02 02:37:17.948321 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-02-02 02:37:17.948332 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-02-02 02:37:17.948342 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-02-02 02:37:17.948352 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-02-02 02:37:17.948363 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-02-02 02:37:17.948373 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-02-02 02:37:17.948383 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-02-02 02:37:17.948394 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-02-02 02:37:17.948404 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-02-02 02:37:17.948493 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-02-02 02:37:17.948515 | 
orchestrator | 2026-02-02 02:37:17.948531 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-02 02:37:17.948542 | orchestrator | Monday 02 February 2026 02:37:15 +0000 (0:00:01.215) 0:00:09.534 ******* 2026-02-02 02:37:17.948553 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:37:17.948564 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:37:17.948574 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:37:17.948585 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:37:17.948596 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:37:17.948606 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:37:17.948617 | orchestrator | 2026-02-02 02:37:17.948628 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-02 02:37:17.948639 | orchestrator | Monday 02 February 2026 02:37:15 +0000 (0:00:00.162) 0:00:09.697 ******* 2026-02-02 02:37:17.948649 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:37:17.948660 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:37:17.948671 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:37:17.948681 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:37:17.948691 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:37:17.948702 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:37:17.948713 | orchestrator | 2026-02-02 02:37:17.948723 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-02 02:37:17.948734 | orchestrator | Monday 02 February 2026 02:37:16 +0000 (0:00:00.199) 0:00:09.896 ******* 2026-02-02 02:37:17.948745 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:37:17.948755 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:37:17.948766 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:37:17.948776 | orchestrator | changed: [testbed-node-2] 2026-02-02 
02:37:17.948787 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:37:17.948797 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:37:17.948808 | orchestrator | 2026-02-02 02:37:17.948818 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-02 02:37:17.948829 | orchestrator | Monday 02 February 2026 02:37:16 +0000 (0:00:00.616) 0:00:10.513 ******* 2026-02-02 02:37:17.948840 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:37:17.948850 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:37:17.948861 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:37:17.948871 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:37:17.948895 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:37:17.948907 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:37:17.948917 | orchestrator | 2026-02-02 02:37:17.948928 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-02 02:37:17.948939 | orchestrator | Monday 02 February 2026 02:37:16 +0000 (0:00:00.186) 0:00:10.700 ******* 2026-02-02 02:37:17.948949 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-02 02:37:17.948960 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:37:17.948971 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-02-02 02:37:17.948981 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:37:17.948992 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-02 02:37:17.949002 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-02 02:37:17.949013 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-02-02 02:37:17.949023 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:37:17.949034 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:37:17.949044 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:37:17.949055 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-02 
02:37:17.949065 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:37:17.949076 | orchestrator | 2026-02-02 02:37:17.949087 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-02 02:37:17.949097 | orchestrator | Monday 02 February 2026 02:37:17 +0000 (0:00:00.704) 0:00:11.404 ******* 2026-02-02 02:37:17.949116 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:37:17.949127 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:37:17.949137 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:37:17.949148 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:37:17.949158 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:37:17.949168 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:37:17.949179 | orchestrator | 2026-02-02 02:37:17.949190 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-02 02:37:17.949200 | orchestrator | Monday 02 February 2026 02:37:17 +0000 (0:00:00.168) 0:00:11.572 ******* 2026-02-02 02:37:17.949211 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:37:17.949222 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:37:17.949232 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:37:17.949242 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:37:17.949262 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:37:19.331399 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:37:19.331624 | orchestrator | 2026-02-02 02:37:19.331645 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-02 02:37:19.331658 | orchestrator | Monday 02 February 2026 02:37:17 +0000 (0:00:00.176) 0:00:11.748 ******* 2026-02-02 02:37:19.331670 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:37:19.331720 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:37:19.331733 | orchestrator | skipping: [testbed-node-2] 2026-02-02 
02:37:19.331744 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:37:19.331755 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:37:19.331766 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:37:19.331776 | orchestrator | 2026-02-02 02:37:19.331787 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-02 02:37:19.331798 | orchestrator | Monday 02 February 2026 02:37:18 +0000 (0:00:00.174) 0:00:11.923 ******* 2026-02-02 02:37:19.331809 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:37:19.331820 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:37:19.331831 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:37:19.331841 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:37:19.331852 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:37:19.331863 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:37:19.331873 | orchestrator | 2026-02-02 02:37:19.331884 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-02 02:37:19.331895 | orchestrator | Monday 02 February 2026 02:37:18 +0000 (0:00:00.682) 0:00:12.605 ******* 2026-02-02 02:37:19.331905 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:37:19.331916 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:37:19.331927 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:37:19.331940 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:37:19.331952 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:37:19.331964 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:37:19.331976 | orchestrator | 2026-02-02 02:37:19.331989 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 02:37:19.332020 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-02 02:37:19.332035 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-02 02:37:19.332048 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-02 02:37:19.332061 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-02 02:37:19.332074 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-02 02:37:19.332107 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-02 02:37:19.332120 | orchestrator | 2026-02-02 02:37:19.332134 | orchestrator | 2026-02-02 02:37:19.332147 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 02:37:19.332159 | orchestrator | Monday 02 February 2026 02:37:19 +0000 (0:00:00.253) 0:00:12.859 ******* 2026-02-02 02:37:19.332171 | orchestrator | =============================================================================== 2026-02-02 02:37:19.332197 | orchestrator | Gathering Facts --------------------------------------------------------- 3.22s 2026-02-02 02:37:19.332220 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.24s 2026-02-02 02:37:19.332231 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.22s 2026-02-02 02:37:19.332243 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.17s 2026-02-02 02:37:19.332254 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.77s 2026-02-02 02:37:19.332264 | orchestrator | Do not require tty for all users ---------------------------------------- 0.75s 2026-02-02 02:37:19.332275 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.70s 2026-02-02 02:37:19.332286 | orchestrator | osism.commons.operator : Set password 
----------------------------------- 0.68s 2026-02-02 02:37:19.332296 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.64s 2026-02-02 02:37:19.332307 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.62s 2026-02-02 02:37:19.332318 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.25s 2026-02-02 02:37:19.332328 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.20s 2026-02-02 02:37:19.332339 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.19s 2026-02-02 02:37:19.332350 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.19s 2026-02-02 02:37:19.332361 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s 2026-02-02 02:37:19.332371 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.18s 2026-02-02 02:37:19.332382 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s 2026-02-02 02:37:19.332393 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s 2026-02-02 02:37:19.332429 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.16s 2026-02-02 02:37:19.680407 | orchestrator | + osism apply --environment custom facts 2026-02-02 02:37:21.766527 | orchestrator | 2026-02-02 02:37:21 | INFO  | Trying to run play facts in environment custom 2026-02-02 02:37:31.956567 | orchestrator | 2026-02-02 02:37:31 | INFO  | Task 14613875-b269-41db-9cda-0bea4bb62f07 (facts) was prepared for execution. 2026-02-02 02:37:31.956682 | orchestrator | 2026-02-02 02:37:31 | INFO  | It takes a moment until task 14613875-b269-41db-9cda-0bea4bb62f07 (facts) has been started and output is visible here. 
2026-02-02 02:38:14.385279 | orchestrator | 2026-02-02 02:38:14.385371 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-02-02 02:38:14.385382 | orchestrator | 2026-02-02 02:38:14.385388 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-02 02:38:14.385395 | orchestrator | Monday 02 February 2026 02:37:36 +0000 (0:00:00.088) 0:00:00.088 ******* 2026-02-02 02:38:14.385402 | orchestrator | ok: [testbed-manager] 2026-02-02 02:38:14.385410 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:38:14.385419 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:38:14.385425 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:38:14.385431 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:38:14.385480 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:38:14.385504 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:38:14.385508 | orchestrator | 2026-02-02 02:38:14.385513 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-02-02 02:38:14.385517 | orchestrator | Monday 02 February 2026 02:37:37 +0000 (0:00:01.450) 0:00:01.539 ******* 2026-02-02 02:38:14.385520 | orchestrator | ok: [testbed-manager] 2026-02-02 02:38:14.385524 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:38:14.385528 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:38:14.385532 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:38:14.385536 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:38:14.385539 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:38:14.385543 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:38:14.385547 | orchestrator | 2026-02-02 02:38:14.385551 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-02-02 02:38:14.385554 | orchestrator | 2026-02-02 02:38:14.385558 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-02-02 02:38:14.385562 | orchestrator | Monday 02 February 2026 02:37:38 +0000 (0:00:01.242) 0:00:02.781 ******* 2026-02-02 02:38:14.385566 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:38:14.385570 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:38:14.385574 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:38:14.385578 | orchestrator | 2026-02-02 02:38:14.385582 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-02 02:38:14.385587 | orchestrator | Monday 02 February 2026 02:37:39 +0000 (0:00:00.130) 0:00:02.912 ******* 2026-02-02 02:38:14.385591 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:38:14.385595 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:38:14.385598 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:38:14.385602 | orchestrator | 2026-02-02 02:38:14.385606 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-02 02:38:14.385609 | orchestrator | Monday 02 February 2026 02:37:39 +0000 (0:00:00.204) 0:00:03.117 ******* 2026-02-02 02:38:14.385613 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:38:14.385617 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:38:14.385621 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:38:14.385624 | orchestrator | 2026-02-02 02:38:14.385628 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-02 02:38:14.385632 | orchestrator | Monday 02 February 2026 02:37:39 +0000 (0:00:00.238) 0:00:03.355 ******* 2026-02-02 02:38:14.385637 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 02:38:14.385642 | orchestrator | 2026-02-02 02:38:14.385646 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-02-02 02:38:14.385650 | orchestrator | Monday 02 February 2026 02:37:39 +0000 (0:00:00.163) 0:00:03.519 ******* 2026-02-02 02:38:14.385653 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:38:14.385657 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:38:14.385661 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:38:14.385664 | orchestrator | 2026-02-02 02:38:14.385668 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-02 02:38:14.385672 | orchestrator | Monday 02 February 2026 02:37:40 +0000 (0:00:00.429) 0:00:03.948 ******* 2026-02-02 02:38:14.385676 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:38:14.385680 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:38:14.385683 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:38:14.385687 | orchestrator | 2026-02-02 02:38:14.385691 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-02 02:38:14.385695 | orchestrator | Monday 02 February 2026 02:37:40 +0000 (0:00:00.140) 0:00:04.089 ******* 2026-02-02 02:38:14.385698 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:38:14.385702 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:38:14.385706 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:38:14.385717 | orchestrator | 2026-02-02 02:38:14.385721 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-02 02:38:14.385728 | orchestrator | Monday 02 February 2026 02:37:41 +0000 (0:00:01.017) 0:00:05.107 ******* 2026-02-02 02:38:14.385732 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:38:14.385736 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:38:14.385739 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:38:14.385743 | orchestrator | 2026-02-02 02:38:14.385747 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-02 
02:38:14.385781 | orchestrator | Monday 02 February 2026 02:37:41 +0000 (0:00:00.462) 0:00:05.569 ******* 2026-02-02 02:38:14.385785 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:38:14.385789 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:38:14.385793 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:38:14.385796 | orchestrator | 2026-02-02 02:38:14.385800 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-02 02:38:14.385804 | orchestrator | Monday 02 February 2026 02:37:42 +0000 (0:00:01.040) 0:00:06.609 ******* 2026-02-02 02:38:14.385814 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:38:14.385818 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:38:14.385822 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:38:14.385826 | orchestrator | 2026-02-02 02:38:14.385830 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-02-02 02:38:14.385839 | orchestrator | Monday 02 February 2026 02:37:58 +0000 (0:00:15.369) 0:00:21.979 ******* 2026-02-02 02:38:14.385843 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:38:14.385847 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:38:14.385852 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:38:14.385856 | orchestrator | 2026-02-02 02:38:14.385861 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-02-02 02:38:14.385877 | orchestrator | Monday 02 February 2026 02:37:58 +0000 (0:00:00.108) 0:00:22.088 ******* 2026-02-02 02:38:14.385882 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:38:14.385886 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:38:14.385890 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:38:14.385895 | orchestrator | 2026-02-02 02:38:14.385899 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-02 
02:38:14.385903 | orchestrator | Monday 02 February 2026 02:38:05 +0000 (0:00:07.172) 0:00:29.261 *******
2026-02-02 02:38:14.385908 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:38:14.385913 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:38:14.385920 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:38:14.385927 | orchestrator |
2026-02-02 02:38:14.385932 | orchestrator | TASK [Copy fact files] *********************************************************
2026-02-02 02:38:14.385936 | orchestrator | Monday 02 February 2026 02:38:05 +0000 (0:00:00.460) 0:00:29.721 *******
2026-02-02 02:38:14.385942 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-02-02 02:38:14.385949 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-02-02 02:38:14.385953 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-02-02 02:38:14.385958 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-02-02 02:38:14.385965 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-02-02 02:38:14.385969 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-02-02 02:38:14.385974 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-02-02 02:38:14.385977 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-02-02 02:38:14.385981 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-02-02 02:38:14.385985 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-02-02 02:38:14.385989 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-02-02 02:38:14.385992 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-02-02 02:38:14.385996 | orchestrator |
2026-02-02 02:38:14.386000 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-02-02 02:38:14.386008 | orchestrator | Monday 02 February 2026 02:38:09 +0000 (0:00:03.506) 0:00:33.227 *******
2026-02-02 02:38:14.386011 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:38:14.386058 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:38:14.386063 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:38:14.386066 | orchestrator |
2026-02-02 02:38:14.386070 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-02 02:38:14.386074 | orchestrator |
2026-02-02 02:38:14.386078 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-02 02:38:14.386082 | orchestrator | Monday 02 February 2026 02:38:10 +0000 (0:00:01.370) 0:00:34.598 *******
2026-02-02 02:38:14.386085 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:38:14.386089 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:38:14.386093 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:38:14.386097 | orchestrator | ok: [testbed-manager]
2026-02-02 02:38:14.386101 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:38:14.386105 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:38:14.386108 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:38:14.386112 | orchestrator |
2026-02-02 02:38:14.386116 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 02:38:14.386121 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 02:38:14.386125 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 02:38:14.386130 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 02:38:14.386134 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 02:38:14.386138 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 02:38:14.386143 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 02:38:14.386149 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 02:38:14.386155 | orchestrator |
2026-02-02 02:38:14.386162 | orchestrator |
2026-02-02 02:38:14.386168 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 02:38:14.386174 | orchestrator | Monday 02 February 2026 02:38:14 +0000 (0:00:03.567) 0:00:38.165 *******
2026-02-02 02:38:14.386180 | orchestrator | ===============================================================================
2026-02-02 02:38:14.386186 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.37s
2026-02-02 02:38:14.386192 | orchestrator | Install required packages (Debian) -------------------------------------- 7.17s
2026-02-02 02:38:14.386198 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.57s
2026-02-02 02:38:14.386204 | orchestrator | Copy fact files --------------------------------------------------------- 3.51s
2026-02-02 02:38:14.386211 | orchestrator | Create custom facts directory ------------------------------------------- 1.45s
2026-02-02 02:38:14.386216 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.37s
2026-02-02 02:38:14.386224 | orchestrator | Copy fact file ---------------------------------------------------------- 1.24s
2026-02-02 02:38:14.655006 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.04s
2026-02-02 02:38:14.655086 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.02s
2026-02-02 02:38:14.655095 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2026-02-02 02:38:14.655124 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s
2026-02-02 02:38:14.655131 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s
2026-02-02 02:38:14.655137 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.24s
2026-02-02 02:38:14.655143 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s
2026-02-02 02:38:14.655149 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s
2026-02-02 02:38:14.655156 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s
2026-02-02 02:38:14.655163 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.13s
2026-02-02 02:38:14.655181 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2026-02-02 02:38:14.980026 | orchestrator | + osism apply bootstrap
2026-02-02 02:38:27.173894 | orchestrator | 2026-02-02 02:38:27 | INFO  | Task 79d49266-009c-4663-927b-9d6ee731c38e (bootstrap) was prepared for execution.
2026-02-02 02:38:27.173991 | orchestrator | 2026-02-02 02:38:27 | INFO  | It takes a moment until task 79d49266-009c-4663-927b-9d6ee731c38e (bootstrap) has been started and output is visible here.
2026-02-02 02:38:43.273979 | orchestrator |
2026-02-02 02:38:43.274154 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-02-02 02:38:43.274178 | orchestrator |
2026-02-02 02:38:43.274190 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-02-02 02:38:43.274202 | orchestrator | Monday 02 February 2026 02:38:31 +0000 (0:00:00.158) 0:00:00.158 *******
2026-02-02 02:38:43.274213 | orchestrator | ok: [testbed-manager]
2026-02-02 02:38:43.274225 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:38:43.274235 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:38:43.274246 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:38:43.274257 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:38:43.274268 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:38:43.274278 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:38:43.274289 | orchestrator |
2026-02-02 02:38:43.274301 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-02 02:38:43.274311 | orchestrator |
2026-02-02 02:38:43.274322 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-02 02:38:43.274333 | orchestrator | Monday 02 February 2026 02:38:31 +0000 (0:00:00.320) 0:00:00.478 *******
2026-02-02 02:38:43.274344 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:38:43.274355 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:38:43.274366 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:38:43.274377 | orchestrator | ok: [testbed-manager]
2026-02-02 02:38:43.274388 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:38:43.274399 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:38:43.274409 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:38:43.274420 | orchestrator |
2026-02-02 02:38:43.274431 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-02-02 02:38:43.274442 | orchestrator |
2026-02-02 02:38:43.274453 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-02 02:38:43.274531 | orchestrator | Monday 02 February 2026 02:38:35 +0000 (0:00:03.507) 0:00:03.985 *******
2026-02-02 02:38:43.274544 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-02-02 02:38:43.274555 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-02-02 02:38:43.274566 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-02-02 02:38:43.274577 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-02-02 02:38:43.274587 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-02-02 02:38:43.274598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 02:38:43.274609 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-02-02 02:38:43.274620 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 02:38:43.274631 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-02 02:38:43.274665 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-02 02:38:43.274677 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 02:38:43.274688 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-02-02 02:38:43.274698 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-02 02:38:43.274709 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-02-02 02:38:43.274720 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-02 02:38:43.274731 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-02 02:38:43.274742 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-02 02:38:43.274753 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-02 02:38:43.274764 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:38:43.274775 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-02-02 02:38:43.274786 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-02-02 02:38:43.274796 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:38:43.274807 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-02 02:38:43.274818 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-02 02:38:43.274829 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-02 02:38:43.274839 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-02 02:38:43.274850 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-02-02 02:38:43.274861 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-02 02:38:43.274871 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-02 02:38:43.274882 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-02 02:38:43.274893 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-02 02:38:43.274903 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-02 02:38:43.274914 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-02 02:38:43.274924 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-02 02:38:43.274935 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-02 02:38:43.274946 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 02:38:43.274956 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:38:43.274967 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-02 02:38:43.274978 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:38:43.274989 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-02 02:38:43.275000 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-02-02 02:38:43.275011 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-02 02:38:43.275022 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-02 02:38:43.275032 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-02 02:38:43.275043 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-02 02:38:43.275054 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:38:43.275065 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-02 02:38:43.275093 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-02 02:38:43.275105 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-02 02:38:43.275116 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-02 02:38:43.275142 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:38:43.275154 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-02 02:38:43.275165 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-02 02:38:43.275175 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-02 02:38:43.275193 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-02 02:38:43.275204 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:38:43.275215 | orchestrator |
2026-02-02 02:38:43.275226 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-02-02 02:38:43.275237 | orchestrator |
2026-02-02 02:38:43.275248 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-02-02 02:38:43.275259 | orchestrator | Monday 02 February 2026 02:38:35 +0000 (0:00:00.456) 0:00:04.442 *******
2026-02-02 02:38:43.275270 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:38:43.275280 | orchestrator | ok: [testbed-manager]
2026-02-02 02:38:43.275291 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:38:43.275302 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:38:43.275313 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:38:43.275323 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:38:43.275334 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:38:43.275345 | orchestrator |
2026-02-02 02:38:43.275356 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-02-02 02:38:43.275367 | orchestrator | Monday 02 February 2026 02:38:37 +0000 (0:00:01.136) 0:00:05.578 *******
2026-02-02 02:38:43.275378 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:38:43.275389 | orchestrator | ok: [testbed-manager]
2026-02-02 02:38:43.275399 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:38:43.275410 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:38:43.275420 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:38:43.275431 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:38:43.275441 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:38:43.275452 | orchestrator |
2026-02-02 02:38:43.275482 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-02-02 02:38:43.275493 | orchestrator | Monday 02 February 2026 02:38:38 +0000 (0:00:00.307) 0:00:06.853 *******
2026-02-02 02:38:43.275524 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 02:38:43.275537 | orchestrator |
2026-02-02 02:38:43.275548 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-02-02 02:38:43.275559 | orchestrator | Monday 02 February 2026 02:38:38 +0000 (0:00:00.307) 0:00:07.161 *******
2026-02-02 02:38:43.275570 | orchestrator | changed: [testbed-manager]
2026-02-02 02:38:43.275581 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:38:43.275592 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:38:43.275603 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:38:43.275613 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:38:43.275624 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:38:43.275635 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:38:43.275646 | orchestrator |
2026-02-02 02:38:43.275656 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-02-02 02:38:43.275667 | orchestrator | Monday 02 February 2026 02:38:40 +0000 (0:00:02.192) 0:00:09.354 *******
2026-02-02 02:38:43.275678 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:38:43.275690 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 02:38:43.275705 | orchestrator |
2026-02-02 02:38:43.275724 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-02-02 02:38:43.275743 | orchestrator | Monday 02 February 2026 02:38:41 +0000 (0:00:00.297) 0:00:09.652 *******
2026-02-02 02:38:43.275762 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:38:43.275780 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:38:43.275799 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:38:43.275817 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:38:43.275835 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:38:43.275853 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:38:43.275883 | orchestrator |
2026-02-02 02:38:43.275902 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-02-02 02:38:43.275920 | orchestrator | Monday 02 February 2026 02:38:42 +0000 (0:00:01.011) 0:00:10.663 *******
2026-02-02 02:38:43.275939 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:38:43.275958 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:38:43.275976 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:38:43.275995 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:38:43.276014 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:38:43.276025 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:38:43.276035 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:38:43.276046 | orchestrator |
2026-02-02 02:38:43.276057 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-02-02 02:38:43.276068 | orchestrator | Monday 02 February 2026 02:38:42 +0000 (0:00:00.576) 0:00:11.240 *******
2026-02-02 02:38:43.276079 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:38:43.276089 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:38:43.276100 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:38:43.276116 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:38:43.276128 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:38:43.276138 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:38:43.276149 | orchestrator | ok: [testbed-manager]
2026-02-02 02:38:43.276160 | orchestrator |
2026-02-02 02:38:43.276170 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-02-02 02:38:43.276182 | orchestrator | Monday 02 February 2026 02:38:43 +0000 (0:00:00.394) 0:00:11.634 *******
2026-02-02 02:38:43.276192 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:38:43.276203 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:38:43.276224 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:38:54.639250 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:38:54.639343 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:38:54.639353 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:38:54.639360 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:38:54.639367 | orchestrator |
2026-02-02 02:38:54.639374 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-02-02 02:38:54.639383 | orchestrator | Monday 02 February 2026 02:38:43 +0000 (0:00:00.184) 0:00:11.819 *******
2026-02-02 02:38:54.639391 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 02:38:54.639410 | orchestrator |
2026-02-02 02:38:54.639416 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-02-02 02:38:54.639423 | orchestrator | Monday 02 February 2026 02:38:43 +0000 (0:00:00.274) 0:00:12.093 *******
2026-02-02 02:38:54.639430 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 02:38:54.639437 | orchestrator |
2026-02-02 02:38:54.639443 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-02-02 02:38:54.639450 | orchestrator | Monday 02 February 2026 02:38:43 +0000 (0:00:00.267) 0:00:12.361 *******
2026-02-02 02:38:54.639456 | orchestrator | ok: [testbed-manager]
2026-02-02 02:38:54.639461 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:38:54.639483 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:38:54.639487 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:38:54.639492 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:38:54.639496 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:38:54.639500 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:38:54.639504 | orchestrator |
2026-02-02 02:38:54.639508 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-02-02 02:38:54.639512 | orchestrator | Monday 02 February 2026 02:38:45 +0000 (0:00:01.293) 0:00:13.654 *******
2026-02-02 02:38:54.639533 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:38:54.639537 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:38:54.639541 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:38:54.639545 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:38:54.639549 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:38:54.639552 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:38:54.639556 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:38:54.639560 | orchestrator |
2026-02-02 02:38:54.639564 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-02-02 02:38:54.639567 | orchestrator | Monday 02 February 2026 02:38:45 +0000 (0:00:00.259) 0:00:13.914 *******
2026-02-02 02:38:54.639571 | orchestrator | ok: [testbed-manager]
2026-02-02 02:38:54.639575 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:38:54.639579 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:38:54.639582 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:38:54.639586 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:38:54.639590 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:38:54.639593 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:38:54.639597 | orchestrator |
2026-02-02 02:38:54.639601 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-02-02 02:38:54.639605 | orchestrator | Monday 02 February 2026 02:38:45 +0000 (0:00:00.490) 0:00:14.404 *******
2026-02-02 02:38:54.639609 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:38:54.639612 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:38:54.639616 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:38:54.639620 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:38:54.639624 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:38:54.639627 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:38:54.639631 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:38:54.639635 | orchestrator |
2026-02-02 02:38:54.639639 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-02-02 02:38:54.639644 | orchestrator | Monday 02 February 2026 02:38:46 +0000 (0:00:00.245) 0:00:14.649 *******
2026-02-02 02:38:54.639647 | orchestrator | ok: [testbed-manager]
2026-02-02 02:38:54.639651 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:38:54.639655 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:38:54.639659 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:38:54.639662 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:38:54.639666 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:38:54.639670 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:38:54.639673 | orchestrator |
2026-02-02 02:38:54.639677 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-02-02 02:38:54.639681 | orchestrator | Monday 02 February 2026 02:38:46 +0000 (0:00:00.482) 0:00:15.132 *******
2026-02-02 02:38:54.639685 | orchestrator | ok: [testbed-manager]
2026-02-02 02:38:54.639688 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:38:54.639692 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:38:54.639696 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:38:54.639701 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:38:54.639708 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:38:54.639717 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:38:54.639724 | orchestrator |
2026-02-02 02:38:54.639729 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-02-02 02:38:54.639736 | orchestrator | Monday 02 February 2026 02:38:47 +0000 (0:00:01.072) 0:00:16.204 *******
2026-02-02 02:38:54.639741 | orchestrator | ok: [testbed-manager]
2026-02-02 02:38:54.639755 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:38:54.639761 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:38:54.639766 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:38:54.639772 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:38:54.639777 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:38:54.639783 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:38:54.639789 | orchestrator |
2026-02-02 02:38:54.639795 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-02-02 02:38:54.639806 | orchestrator | Monday 02 February 2026 02:38:48 +0000 (0:00:00.983) 0:00:17.187 *******
2026-02-02 02:38:54.639828 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 02:38:54.639836 | orchestrator |
2026-02-02 02:38:54.639842 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-02-02 02:38:54.639849 | orchestrator | Monday 02 February 2026 02:38:48 +0000 (0:00:00.270) 0:00:17.458 *******
2026-02-02 02:38:54.639854 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:38:54.639860 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:38:54.639866 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:38:54.639872 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:38:54.639878 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:38:54.639884 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:38:54.639891 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:38:54.639897 | orchestrator |
2026-02-02 02:38:54.639903 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-02-02 02:38:54.639910 | orchestrator | Monday 02 February 2026 02:38:50 +0000 (0:00:01.144) 0:00:18.603 *******
2026-02-02 02:38:54.639916 | orchestrator | ok: [testbed-manager]
2026-02-02 02:38:54.639922 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:38:54.639929 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:38:54.639935 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:38:54.639941 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:38:54.639947 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:38:54.639954 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:38:54.639960 | orchestrator |
2026-02-02 02:38:54.639967 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-02-02 02:38:54.639973 | orchestrator | Monday 02 February 2026 02:38:50 +0000 (0:00:00.191) 0:00:18.794 *******
2026-02-02 02:38:54.639979 | orchestrator | ok: [testbed-manager]
2026-02-02 02:38:54.639985 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:38:54.639991 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:38:54.639997 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:38:54.640004 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:38:54.640010 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:38:54.640015 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:38:54.640022 | orchestrator |
2026-02-02 02:38:54.640028 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-02-02 02:38:54.640035 | orchestrator | Monday 02 February 2026 02:38:50 +0000 (0:00:00.204) 0:00:18.998 *******
2026-02-02 02:38:54.640041 | orchestrator | ok: [testbed-manager]
2026-02-02 02:38:54.640047 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:38:54.640054 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:38:54.640060 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:38:54.640066 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:38:54.640072 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:38:54.640079 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:38:54.640085 | orchestrator |
2026-02-02 02:38:54.640091 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-02-02 02:38:54.640098 | orchestrator | Monday 02 February 2026 02:38:50 +0000 (0:00:00.188) 0:00:19.186 *******
2026-02-02 02:38:54.640108 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 02:38:54.640116 | orchestrator |
2026-02-02 02:38:54.640123 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-02-02 02:38:54.640128 | orchestrator | Monday 02 February 2026 02:38:50 +0000 (0:00:00.284) 0:00:19.471 *******
2026-02-02 02:38:54.640134 | orchestrator | ok: [testbed-manager]
2026-02-02 02:38:54.640141 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:38:54.640157 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:38:54.640163 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:38:54.640169 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:38:54.640176 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:38:54.640182 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:38:54.640188 | orchestrator |
2026-02-02 02:38:54.640194 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-02-02 02:38:54.640200 | orchestrator | Monday 02 February 2026 02:38:51 +0000 (0:00:00.488) 0:00:19.959 *******
2026-02-02 02:38:54.640206 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:38:54.640213 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:38:54.640219 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:38:54.640226 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:38:54.640232 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:38:54.640239 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:38:54.640245 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:38:54.640251 | orchestrator |
2026-02-02 02:38:54.640258 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-02-02 02:38:54.640264 | orchestrator | Monday 02 February 2026 02:38:51 +0000 (0:00:00.269) 0:00:20.228 *******
2026-02-02 02:38:54.640271 | orchestrator | ok: [testbed-manager]
2026-02-02 02:38:54.640277 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:38:54.640283 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:38:54.640289 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:38:54.640295 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:38:54.640299 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:38:54.640303 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:38:54.640306 | orchestrator |
2026-02-02 02:38:54.640310 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-02-02 02:38:54.640314 | orchestrator | Monday 02 February 2026 02:38:52 +0000 (0:00:01.007) 0:00:21.236 *******
2026-02-02 02:38:54.640318 | orchestrator | ok: [testbed-manager]
2026-02-02 02:38:54.640321 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:38:54.640325 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:38:54.640329 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:38:54.640333 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:38:54.640344 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:38:54.640348 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:38:54.640351 | orchestrator |
2026-02-02 02:38:54.640355 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-02-02 02:38:54.640359 | orchestrator | Monday 02 February 2026 02:38:53 +0000 (0:00:00.544) 0:00:21.780 *******
2026-02-02 02:38:54.640363 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:38:54.640367 | orchestrator | ok: [testbed-manager]
2026-02-02 02:38:54.640370 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:38:54.640374 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:38:54.640385 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:39:33.146100 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:39:33.146205 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:39:33.146231 | orchestrator |
2026-02-02 02:39:33.146253 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-02 02:39:33.146277 | orchestrator | Monday 02 February 2026 02:38:54 +0000 (0:00:01.321) 0:00:23.102 *******
2026-02-02 02:39:33.146300 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:39:33.146322 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:39:33.146339 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:39:33.146364 | orchestrator | changed: [testbed-manager]
2026-02-02 02:39:33.146390 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:39:33.146409 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:39:33.146427 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:39:33.146446 | orchestrator |
2026-02-02 02:39:33.146466 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-02-02 02:39:33.146485 | orchestrator | Monday 02 February 2026 02:39:09 +0000 (0:00:15.072) 0:00:38.174 *******
2026-02-02 02:39:33.146533 | orchestrator | ok: [testbed-manager]
2026-02-02 02:39:33.146573 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:39:33.146592 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:39:33.146611 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:39:33.146628 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:39:33.146645 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:39:33.146663 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:39:33.146680 | orchestrator |
2026-02-02 02:39:33.146698 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-02-02 02:39:33.146718 | orchestrator | Monday 02 February 2026 02:39:09 +0000 (0:00:00.213) 0:00:38.388 *******
2026-02-02 02:39:33.146737 | orchestrator | ok: [testbed-manager]
2026-02-02 02:39:33.146755 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:39:33.146772 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:39:33.146792 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:39:33.146811 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:39:33.146830 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:39:33.146845 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:39:33.146858 | orchestrator |
2026-02-02 02:39:33.146870 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-02-02 02:39:33.146883 | orchestrator | Monday 02 February 2026 02:39:10 +0000 (0:00:00.182) 0:00:38.571 *******
2026-02-02 02:39:33.146909 | orchestrator | ok: [testbed-manager]
2026-02-02 02:39:33.146921 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:39:33.146966 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:39:33.146992 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:39:33.147029 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:39:33.147048 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:39:33.147066 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:39:33.147084 | orchestrator |
2026-02-02 02:39:33.147103 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-02-02 02:39:33.147140 | orchestrator | Monday 02 February 2026 02:39:10 +0000 (0:00:00.236) 0:00:38.807 *******
2026-02-
02:39:33.147161 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 02:39:33.147182 | orchestrator | 2026-02-02 02:39:33.147201 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-02-02 02:39:33.147219 | orchestrator | Monday 02 February 2026 02:39:10 +0000 (0:00:00.288) 0:00:39.096 ******* 2026-02-02 02:39:33.147238 | orchestrator | ok: [testbed-manager] 2026-02-02 02:39:33.147257 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:39:33.147275 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:39:33.147293 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:39:33.147311 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:39:33.147328 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:39:33.147347 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:39:33.147366 | orchestrator | 2026-02-02 02:39:33.147385 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-02-02 02:39:33.147405 | orchestrator | Monday 02 February 2026 02:39:12 +0000 (0:00:01.583) 0:00:40.680 ******* 2026-02-02 02:39:33.147417 | orchestrator | changed: [testbed-manager] 2026-02-02 02:39:33.147428 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:39:33.147439 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:39:33.147450 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:39:33.147460 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:39:33.147471 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:39:33.147481 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:39:33.147518 | orchestrator | 2026-02-02 02:39:33.147530 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-02-02 02:39:33.147541 | 
orchestrator | Monday 02 February 2026 02:39:13 +0000 (0:00:00.982) 0:00:41.662 ******* 2026-02-02 02:39:33.147551 | orchestrator | ok: [testbed-manager] 2026-02-02 02:39:33.147562 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:39:33.147573 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:39:33.147599 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:39:33.147610 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:39:33.147621 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:39:33.147631 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:39:33.147642 | orchestrator | 2026-02-02 02:39:33.147653 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-02-02 02:39:33.147664 | orchestrator | Monday 02 February 2026 02:39:13 +0000 (0:00:00.786) 0:00:42.448 ******* 2026-02-02 02:39:33.147676 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 02:39:33.147688 | orchestrator | 2026-02-02 02:39:33.147712 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-02-02 02:39:33.147724 | orchestrator | Monday 02 February 2026 02:39:14 +0000 (0:00:00.280) 0:00:42.729 ******* 2026-02-02 02:39:33.147734 | orchestrator | changed: [testbed-manager] 2026-02-02 02:39:33.147768 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:39:33.147779 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:39:33.147790 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:39:33.147801 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:39:33.147812 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:39:33.147822 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:39:33.147833 | orchestrator | 2026-02-02 02:39:33.147864 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2026-02-02 02:39:33.147876 | orchestrator | Monday 02 February 2026 02:39:15 +0000 (0:00:00.938) 0:00:43.668 ******* 2026-02-02 02:39:33.147887 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:39:33.147898 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:39:33.147909 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:39:33.147919 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:39:33.147930 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:39:33.147940 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:39:33.147950 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:39:33.147961 | orchestrator | 2026-02-02 02:39:33.147972 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-02-02 02:39:33.147983 | orchestrator | Monday 02 February 2026 02:39:15 +0000 (0:00:00.199) 0:00:43.868 ******* 2026-02-02 02:39:33.147994 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 02:39:33.148005 | orchestrator | 2026-02-02 02:39:33.148015 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-02-02 02:39:33.148026 | orchestrator | Monday 02 February 2026 02:39:15 +0000 (0:00:00.301) 0:00:44.169 ******* 2026-02-02 02:39:33.148037 | orchestrator | ok: [testbed-manager] 2026-02-02 02:39:33.148047 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:39:33.148058 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:39:33.148068 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:39:33.148079 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:39:33.148089 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:39:33.148100 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:39:33.148110 | 
orchestrator | 2026-02-02 02:39:33.148121 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-02-02 02:39:33.148132 | orchestrator | Monday 02 February 2026 02:39:17 +0000 (0:00:01.561) 0:00:45.731 ******* 2026-02-02 02:39:33.148143 | orchestrator | changed: [testbed-manager] 2026-02-02 02:39:33.148153 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:39:33.148164 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:39:33.148175 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:39:33.148195 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:39:33.148215 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:39:33.148233 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:39:33.148264 | orchestrator | 2026-02-02 02:39:33.148285 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-02-02 02:39:33.148305 | orchestrator | Monday 02 February 2026 02:39:18 +0000 (0:00:01.072) 0:00:46.804 ******* 2026-02-02 02:39:33.148326 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:39:33.148345 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:39:33.148364 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:39:33.148383 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:39:33.148402 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:39:33.148421 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:39:33.148441 | orchestrator | changed: [testbed-manager] 2026-02-02 02:39:33.148462 | orchestrator | 2026-02-02 02:39:33.148484 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-02-02 02:39:33.148592 | orchestrator | Monday 02 February 2026 02:39:30 +0000 (0:00:12.154) 0:00:58.959 ******* 2026-02-02 02:39:33.148613 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:39:33.148669 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:39:33.148692 | orchestrator | ok: 
[testbed-node-1] 2026-02-02 02:39:33.148712 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:39:33.148732 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:39:33.148752 | orchestrator | ok: [testbed-manager] 2026-02-02 02:39:33.148774 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:39:33.148796 | orchestrator | 2026-02-02 02:39:33.148818 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-02-02 02:39:33.148838 | orchestrator | Monday 02 February 2026 02:39:31 +0000 (0:00:01.185) 0:01:00.145 ******* 2026-02-02 02:39:33.148859 | orchestrator | ok: [testbed-manager] 2026-02-02 02:39:33.148879 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:39:33.148897 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:39:33.148914 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:39:33.148933 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:39:33.148954 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:39:33.148974 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:39:33.148993 | orchestrator | 2026-02-02 02:39:33.149014 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-02-02 02:39:33.149035 | orchestrator | Monday 02 February 2026 02:39:32 +0000 (0:00:00.847) 0:01:00.993 ******* 2026-02-02 02:39:33.149057 | orchestrator | ok: [testbed-manager] 2026-02-02 02:39:33.149078 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:39:33.149097 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:39:33.149116 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:39:33.149135 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:39:33.149154 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:39:33.149172 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:39:33.149192 | orchestrator | 2026-02-02 02:39:33.149212 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-02-02 02:39:33.149233 | orchestrator | Monday 
02 February 2026 02:39:32 +0000 (0:00:00.209) 0:01:01.203 ******* 2026-02-02 02:39:33.149254 | orchestrator | ok: [testbed-manager] 2026-02-02 02:39:33.149275 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:39:33.149296 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:39:33.149316 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:39:33.149335 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:39:33.149356 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:39:33.149376 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:39:33.149396 | orchestrator | 2026-02-02 02:39:33.149429 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-02-02 02:39:33.149449 | orchestrator | Monday 02 February 2026 02:39:32 +0000 (0:00:00.178) 0:01:01.381 ******* 2026-02-02 02:39:33.149472 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 02:39:33.149547 | orchestrator | 2026-02-02 02:39:33.149594 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-02-02 02:41:50.907764 | orchestrator | Monday 02 February 2026 02:39:33 +0000 (0:00:00.233) 0:01:01.614 ******* 2026-02-02 02:41:50.907860 | orchestrator | ok: [testbed-manager] 2026-02-02 02:41:50.907872 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:41:50.907880 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:41:50.907887 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:41:50.907894 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:41:50.907901 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:41:50.907907 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:41:50.907914 | orchestrator | 2026-02-02 02:41:50.907922 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] 
*************************** 2026-02-02 02:41:50.907929 | orchestrator | Monday 02 February 2026 02:39:34 +0000 (0:00:01.581) 0:01:03.195 ******* 2026-02-02 02:41:50.907935 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:41:50.907944 | orchestrator | changed: [testbed-manager] 2026-02-02 02:41:50.907950 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:41:50.907957 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:41:50.907964 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:41:50.907970 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:41:50.907977 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:41:50.907983 | orchestrator | 2026-02-02 02:41:50.907990 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-02-02 02:41:50.907998 | orchestrator | Monday 02 February 2026 02:39:35 +0000 (0:00:00.586) 0:01:03.781 ******* 2026-02-02 02:41:50.908004 | orchestrator | ok: [testbed-manager] 2026-02-02 02:41:50.908011 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:41:50.908017 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:41:50.908024 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:41:50.908030 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:41:50.908037 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:41:50.908044 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:41:50.908050 | orchestrator | 2026-02-02 02:41:50.908058 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-02-02 02:41:50.908065 | orchestrator | Monday 02 February 2026 02:39:35 +0000 (0:00:00.270) 0:01:04.052 ******* 2026-02-02 02:41:50.908074 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:41:50.908086 | orchestrator | ok: [testbed-manager] 2026-02-02 02:41:50.908098 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:41:50.908107 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:41:50.908114 | orchestrator | ok: [testbed-node-2] 
2026-02-02 02:41:50.908120 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:41:50.908127 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:41:50.908133 | orchestrator |
2026-02-02 02:41:50.908140 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-02-02 02:41:50.908147 | orchestrator | Monday 02 February 2026  02:39:36 +0000 (0:00:01.110) 0:01:05.163 *******
2026-02-02 02:41:50.908154 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:41:50.908161 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:41:50.908167 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:41:50.908175 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:41:50.908184 | orchestrator | changed: [testbed-manager]
2026-02-02 02:41:50.908191 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:41:50.908198 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:41:50.908206 | orchestrator |
2026-02-02 02:41:50.908217 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-02-02 02:41:50.908225 | orchestrator | Monday 02 February 2026  02:39:38 +0000 (0:00:01.695) 0:01:06.858 *******
2026-02-02 02:41:50.908232 | orchestrator | ok: [testbed-manager]
2026-02-02 02:41:50.908240 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:41:50.908248 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:41:50.908255 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:41:50.908263 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:41:50.908270 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:41:50.908278 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:41:50.908285 | orchestrator |
2026-02-02 02:41:50.908293 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-02-02 02:41:50.908319 | orchestrator | Monday 02 February 2026  02:39:40 +0000 (0:00:02.365) 0:01:09.224 *******
2026-02-02 02:41:50.908327 | orchestrator | ok: [testbed-manager]
2026-02-02 02:41:50.908335 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:41:50.908343 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:41:50.908351 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:41:50.908358 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:41:50.908366 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:41:50.908373 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:41:50.908381 | orchestrator |
2026-02-02 02:41:50.908388 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-02-02 02:41:50.908396 | orchestrator | Monday 02 February 2026  02:40:19 +0000 (0:00:38.659) 0:01:47.883 *******
2026-02-02 02:41:50.908404 | orchestrator | changed: [testbed-manager]
2026-02-02 02:41:50.908412 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:41:50.908420 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:41:50.908427 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:41:50.908435 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:41:50.908442 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:41:50.908450 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:41:50.908458 | orchestrator |
2026-02-02 02:41:50.908466 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-02-02 02:41:50.908473 | orchestrator | Monday 02 February 2026  02:41:33 +0000 (0:01:14.276) 0:03:02.159 *******
2026-02-02 02:41:50.908481 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:41:50.908488 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:41:50.908496 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:41:50.908503 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:41:50.908510 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:41:50.908518 | orchestrator | ok: [testbed-manager]
2026-02-02 02:41:50.908526 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:41:50.908533 | orchestrator |
2026-02-02 02:41:50.908541 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-02-02 02:41:50.908550 | orchestrator | Monday 02 February 2026  02:41:35 +0000 (0:00:01.688) 0:03:03.848 *******
2026-02-02 02:41:50.908557 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:41:50.908565 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:41:50.908594 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:41:50.908601 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:41:50.908607 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:41:50.908614 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:41:50.908621 | orchestrator | changed: [testbed-manager]
2026-02-02 02:41:50.908627 | orchestrator |
2026-02-02 02:41:50.908634 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-02-02 02:41:50.908641 | orchestrator | Monday 02 February 2026  02:41:48 +0000 (0:00:13.306) 0:03:17.155 *******
2026-02-02 02:41:50.908676 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-02-02 02:41:50.908699 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-02-02 02:41:50.908715 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-02-02 02:41:50.908723 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-02 02:41:50.908730 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-02 02:41:50.908784 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-02-02 02:41:50.908792 | orchestrator |
2026-02-02 02:41:50.908799 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-02-02 02:41:50.908806 | orchestrator | Monday 02 February 2026  02:41:49 +0000 (0:00:00.399) 0:03:17.555 *******
2026-02-02 02:41:50.908812 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-02 02:41:50.908819 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:41:50.908826 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-02 02:41:50.908833 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:41:50.908839 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-02 02:41:50.908846 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:41:50.908853 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-02 02:41:50.908859 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:41:50.908866 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-02 02:41:50.908873 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-02 02:41:50.908879 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-02 02:41:50.908886 | orchestrator |
2026-02-02 02:41:50.908893 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-02-02 02:41:50.908900 | orchestrator | Monday 02 February 2026  02:41:50 +0000 (0:00:01.718) 0:03:19.274 *******
2026-02-02 02:41:50.908910 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-02 02:41:50.908918 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-02 02:41:50.908925 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-02 02:41:50.908932 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-02 02:41:50.908938 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-02 02:41:50.908950 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-02 02:41:55.549631 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-02 02:41:55.549749 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-02 02:41:55.549794 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-02 02:41:55.549806 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-02 02:41:55.549819 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-02 02:41:55.549830 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-02 02:41:55.549840 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-02 02:41:55.549851 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-02 02:41:55.549862 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:41:55.549874 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-02 02:41:55.549885 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-02 02:41:55.549897 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-02 02:41:55.549907 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-02 02:41:55.549918 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-02 02:41:55.549929 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-02 02:41:55.549940 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-02 02:41:55.549951 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-02 02:41:55.549961 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-02 02:41:55.549972 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-02 02:41:55.549983 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-02 02:41:55.549996 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-02 02:41:55.550014 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-02 02:41:55.550117 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:41:55.550140 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-02 02:41:55.550162 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-02 02:41:55.550181 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-02 02:41:55.550195 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-02 02:41:55.550208 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:41:55.550221 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-02 02:41:55.550233 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-02 02:41:55.550246 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-02 02:41:55.550258 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-02 02:41:55.550271 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-02 02:41:55.550284 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-02 02:41:55.550297 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-02 02:41:55.550310 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-02 02:41:55.550331 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-02 02:41:55.550344 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:41:55.550371 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-02 02:41:55.550384 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-02 02:41:55.550396 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-02 02:41:55.550409 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-02 02:41:55.550422 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-02 02:41:55.550455 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-02 02:41:55.550467 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-02 02:41:55.550478 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-02 02:41:55.550489 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-02 02:41:55.550499 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-02 02:41:55.550510 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-02 02:41:55.550521 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-02 02:41:55.550532 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-02 02:41:55.550542 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-02 02:41:55.550553 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-02 02:41:55.550563 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-02 02:41:55.550618 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-02 02:41:55.550630 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-02 02:41:55.550641 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-02 02:41:55.550652 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-02 02:41:55.550663 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-02 02:41:55.550674 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-02 02:41:55.550685 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-02 02:41:55.550696 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-02 02:41:55.550707 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-02 02:41:55.550717 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-02 02:41:55.550728 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-02 02:41:55.550739 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-02 02:41:55.550750 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-02 02:41:55.550762 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-02 02:41:55.550781 | orchestrator |
2026-02-02 02:41:55.550792 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-02-02 02:41:55.550804 | orchestrator | Monday 02 February 2026  02:41:54 +0000 (0:00:03.689) 0:03:22.964 *******
2026-02-02 02:41:55.550815 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-02 02:41:55.550826 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-02 02:41:55.550837 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-02 02:41:55.550848 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-02 02:41:55.550858 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-02 02:41:55.550869 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-02 02:41:55.550880 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-02 02:41:55.550891 | orchestrator |
2026-02-02 02:41:55.550902 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-02-02 02:41:55.550913 | orchestrator | Monday 02 February 2026  02:41:55 +0000 (0:00:00.572) 0:03:23.536 *******
2026-02-02 02:41:55.550924 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-02 02:41:55.550935 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:41:55.550946 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-02 02:41:55.550962 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-02 02:41:55.550974 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:41:55.550985 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:41:55.550996 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-02 02:41:55.551007 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:41:55.551018 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-02 02:41:55.551029 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-02 02:41:55.551047 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-02 02:42:11.628800 | orchestrator |
2026-02-02 02:42:11.628909 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-02-02 02:42:11.628926 | orchestrator | Monday 02 February 2026  02:41:55 +0000 (0:00:00.479) 0:03:24.016 *******
2026-02-02 02:42:11.628938 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-02 
02:42:11.628950 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-02 02:42:11.628961 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:42:11.628973 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:42:11.628984 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-02 02:42:11.628995 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-02 02:42:11.629006 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:42:11.629017 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:42:11.629028 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-02 02:42:11.629039 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-02 02:42:11.629050 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-02 02:42:11.629060 | orchestrator | 2026-02-02 02:42:11.629072 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-02-02 02:42:11.629107 | orchestrator | Monday 02 February 2026 02:41:57 +0000 (0:00:01.603) 0:03:25.619 ******* 2026-02-02 02:42:11.629119 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-02 02:42:11.629130 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:42:11.629141 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-02 02:42:11.629152 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-02 02:42:11.629162 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:42:11.629173 
| orchestrator | skipping: [testbed-node-1] 2026-02-02 02:42:11.629184 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-02 02:42:11.629195 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:42:11.629206 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-02 02:42:11.629217 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-02 02:42:11.629227 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-02 02:42:11.629238 | orchestrator | 2026-02-02 02:42:11.629249 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-02-02 02:42:11.629260 | orchestrator | Monday 02 February 2026 02:41:59 +0000 (0:00:02.627) 0:03:28.247 ******* 2026-02-02 02:42:11.629271 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:42:11.629282 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:42:11.629293 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:42:11.629303 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:42:11.629314 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:42:11.629325 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:42:11.629336 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:42:11.629346 | orchestrator | 2026-02-02 02:42:11.629357 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-02-02 02:42:11.629368 | orchestrator | Monday 02 February 2026 02:42:00 +0000 (0:00:00.299) 0:03:28.546 ******* 2026-02-02 02:42:11.629379 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:42:11.629391 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:42:11.629402 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:42:11.629413 | orchestrator | ok: [testbed-node-2] 
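
(Aside: the osism.commons.sysctl tasks above apply a different set of kernel parameters per host group — generic, compute, network, k3s_node — so a host's effective sysctl configuration is the merge of the sets for the groups it belongs to. The sketch below is illustrative only: the mapping is reconstructed from this log, not taken from the role's defaults, and `render_sysctl_conf` is a hypothetical helper, not part of the role.)

```python
# Hypothetical model of the group-specific sysctl parameters visible in
# the log above. The real mechanism is the osism.commons.sysctl Ansible
# role; this only sketches how group memberships merge into one config.
SYSCTL_BY_GROUP = {
    "generic": {"vm.swappiness": 1},
    "compute": {"net.netfilter.nf_conntrack_max": 1048576},
    "network": {"net.netfilter.nf_conntrack_max": 1048576},
    "k3s_node": {"fs.inotify.max_user_instances": 1024},
}


def render_sysctl_conf(groups):
    """Merge the parameters for a host's groups into sysctl.conf-style lines."""
    merged = {}
    for group in groups:
        merged.update(SYSCTL_BY_GROUP.get(group, {}))
    return [f"{name} = {value}" for name, value in sorted(merged.items())]


# A compute node in this testbed gets both the generic and compute sets:
print(render_sysctl_conf(["generic", "compute"]))
```

In the run above, testbed-node-0/1/2 take the compute and network sets while testbed-node-3/4/5 take the k3s_node set, which is why the same task shows `changed` for one half of the nodes and `skipping` for the other.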
2026-02-02 02:42:11.629424 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:42:11.629434 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:42:11.629445 | orchestrator | ok: [testbed-manager]
2026-02-02 02:42:11.629456 | orchestrator |
2026-02-02 02:42:11.629466 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-02-02 02:42:11.629477 | orchestrator | Monday 02 February 2026 02:42:05 +0000 (0:00:05.609) 0:03:34.156 *******
2026-02-02 02:42:11.629488 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-02-02 02:42:11.629500 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-02-02 02:42:11.629510 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:42:11.629521 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-02-02 02:42:11.629532 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:42:11.629543 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:42:11.629553 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-02-02 02:42:11.629564 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:42:11.629575 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-02-02 02:42:11.629646 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:42:11.629675 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-02-02 02:42:11.629687 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:42:11.629698 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-02-02 02:42:11.629709 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:42:11.629728 | orchestrator |
2026-02-02 02:42:11.629739 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-02-02 02:42:11.629750 | orchestrator | Monday 02 February 2026 02:42:06 +0000 (0:00:00.336) 0:03:34.492 *******
2026-02-02 02:42:11.629761 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-02-02 02:42:11.629772 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-02-02 02:42:11.629783 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-02-02 02:42:11.629811 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-02-02 02:42:11.629822 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-02-02 02:42:11.629833 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-02-02 02:42:11.629844 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-02-02 02:42:11.629855 | orchestrator |
2026-02-02 02:42:11.629865 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-02-02 02:42:11.629876 | orchestrator | Monday 02 February 2026 02:42:07 +0000 (0:00:01.169) 0:03:35.662 *******
2026-02-02 02:42:11.629888 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 02:42:11.629902 | orchestrator |
2026-02-02 02:42:11.629913 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-02-02 02:42:11.629924 | orchestrator | Monday 02 February 2026 02:42:07 +0000 (0:00:00.463) 0:03:36.126 *******
2026-02-02 02:42:11.629934 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:42:11.629945 | orchestrator | ok: [testbed-manager]
2026-02-02 02:42:11.629956 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:42:11.629966 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:42:11.629977 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:42:11.629988 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:42:11.629998 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:42:11.630009 | orchestrator |
2026-02-02 02:42:11.630081 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-02-02 02:42:11.630093 | orchestrator | Monday 02 February 2026 02:42:08 +0000 (0:00:01.162) 0:03:37.288 *******
2026-02-02 02:42:11.630104 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:42:11.630115 | orchestrator | ok: [testbed-manager]
2026-02-02 02:42:11.630126 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:42:11.630137 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:42:11.630147 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:42:11.630158 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:42:11.630169 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:42:11.630179 | orchestrator |
2026-02-02 02:42:11.630190 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-02-02 02:42:11.630201 | orchestrator | Monday 02 February 2026 02:42:09 +0000 (0:00:00.670) 0:03:37.959 *******
2026-02-02 02:42:11.630212 | orchestrator | changed: [testbed-manager]
2026-02-02 02:42:11.630223 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:42:11.630234 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:42:11.630245 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:42:11.630255 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:42:11.630266 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:42:11.630277 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:42:11.630288 | orchestrator |
2026-02-02 02:42:11.630298 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-02-02 02:42:11.630310 | orchestrator | Monday 02 February 2026 02:42:10 +0000 (0:00:00.602) 0:03:38.561 *******
2026-02-02 02:42:11.630320 | orchestrator | ok: [testbed-manager]
2026-02-02 02:42:11.630331 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:42:11.630342 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:42:11.630353 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:42:11.630364 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:42:11.630374 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:42:11.630385 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:42:11.630396 | orchestrator |
2026-02-02 02:42:11.630406 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-02-02 02:42:11.630425 | orchestrator | Monday 02 February 2026 02:42:10 +0000 (0:00:00.561) 0:03:39.123 *******
2026-02-02 02:42:11.630441 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1769998580.6170735, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-02 02:42:11.630455 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1769998588.7489614, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-02 02:42:11.630473 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1769998591.4439585, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-02 02:42:11.630508 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1769998593.982237, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-02 02:42:16.374455 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1769998601.4316638, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-02 02:42:16.374636 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1769998585.840099, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-02 02:42:16.374657 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1769998598.5646026, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-02 02:42:16.374697 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-02 02:42:16.374709 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-02 02:42:16.374736 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-02 02:42:16.374748 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-02 02:42:16.374780 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-02 02:42:16.374792 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-02 02:42:16.374804 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-02 02:42:16.374824 | orchestrator |
2026-02-02 02:42:16.374838 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-02-02 02:42:16.374851 | orchestrator | Monday 02 February 2026 02:42:11 +0000 (0:00:00.971) 0:03:40.095 *******
2026-02-02 02:42:16.374862 | orchestrator | changed: [testbed-manager]
2026-02-02 02:42:16.374874 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:42:16.374885 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:42:16.374895 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:42:16.374906 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:42:16.374917 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:42:16.374928 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:42:16.374938 | orchestrator |
2026-02-02 02:42:16.374949 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-02-02 02:42:16.374960 | orchestrator | Monday 02 February 2026 02:42:12 +0000 (0:00:01.026) 0:03:41.121 *******
2026-02-02 02:42:16.374971 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:42:16.374981 | orchestrator | changed: [testbed-manager]
2026-02-02 02:42:16.375009 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:42:16.375032 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:42:16.375045 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:42:16.375058 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:42:16.375070 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:42:16.375081 | orchestrator |
2026-02-02 02:42:16.375092 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-02-02 02:42:16.375103 | orchestrator | Monday 02 February 2026 02:42:13 +0000 (0:00:01.108) 0:03:42.230 *******
2026-02-02 02:42:16.375114 | orchestrator | changed: [testbed-manager]
2026-02-02 02:42:16.375124 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:42:16.375135 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:42:16.375146 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:42:16.375156 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:42:16.375169 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:42:16.375189 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:42:16.375206 | orchestrator |
2026-02-02 02:42:16.375225 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-02-02 02:42:16.375244 | orchestrator | Monday 02 February 2026 02:42:14 +0000 (0:00:01.118) 0:03:43.348 *******
2026-02-02 02:42:16.375264 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:42:16.375283 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:42:16.375310 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:42:16.375321 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:42:16.375332 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:42:16.375343 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:42:16.375353 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:42:16.375364 | orchestrator |
2026-02-02 02:42:16.375374 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-02-02 02:42:16.375385 | orchestrator | Monday 02 February 2026 02:42:15 +0000 (0:00:00.303) 0:03:43.651 *******
2026-02-02 02:42:16.375396 | orchestrator | ok: [testbed-manager]
2026-02-02 02:42:16.375407 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:42:16.375432 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:42:16.375453 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:42:16.375464 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:42:16.375475 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:42:16.375485 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:42:16.375496 | orchestrator |
2026-02-02 02:42:16.375507 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-02-02 02:42:16.375518 | orchestrator | Monday 02 February 2026 02:42:15 +0000 (0:00:00.774) 0:03:44.426 *******
2026-02-02 02:42:16.375530 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 02:42:16.375550 | orchestrator |
2026-02-02 02:42:16.375562 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-02-02 02:42:16.375605 | orchestrator | Monday 02 February 2026 02:42:16 +0000 (0:00:00.417) 0:03:44.843 *******
2026-02-02 02:43:30.505353 | orchestrator | ok: [testbed-manager]
2026-02-02 02:43:30.505443 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:43:30.505452 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:43:30.505457 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:43:30.505461 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:43:30.505465 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:43:30.505469 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:43:30.505473 | orchestrator |
2026-02-02 02:43:30.505479 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
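
(Aside: the "Remove pam_motd.so rule" task earlier in this play edits /etc/pam.d/sshd and /etc/pam.d/login on every host so that PAM no longer prints the motd, leaving the static /etc/motd copied afterwards as the only banner. The snippet below is a standalone illustration of that end result only; the role itself performs the edit with Ansible modules, and `strip_pam_motd` is a hypothetical helper.)

```python
# Illustrative model of the effect of the pam_motd.so removal seen in
# the log: any PAM rule line referencing pam_motd.so is dropped, while
# all other rules are kept unchanged.
def strip_pam_motd(pam_config: str) -> str:
    kept = [
        line
        for line in pam_config.splitlines()
        if "pam_motd.so" not in line
    ]
    return "\n".join(kept) + "\n"


# Example input resembling a Debian-family /etc/pam.d/sshd fragment:
example = (
    "session    optional     pam_motd.so motd=/run/motd.dynamic\n"
    "session    optional     pam_motd.so noupdate\n"
    "session    required     pam_limits.so\n"
)
print(strip_pam_motd(example))
```

This is why the subsequent "Configure SSH to print the motd" task is skipped on all hosts while "Configure SSH to not print the motd" reports ok.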
2026-02-02 02:43:30.505484 | orchestrator | Monday 02 February 2026 02:42:24 +0000 (0:00:07.737) 0:03:52.581 *******
2026-02-02 02:43:30.505488 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:43:30.505492 | orchestrator | ok: [testbed-manager]
2026-02-02 02:43:30.505496 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:43:30.505500 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:43:30.505503 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:43:30.505507 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:43:30.505511 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:43:30.505515 | orchestrator |
2026-02-02 02:43:30.505519 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-02-02 02:43:30.505523 | orchestrator | Monday 02 February 2026 02:42:25 +0000 (0:00:01.258) 0:03:53.839 *******
2026-02-02 02:43:30.505527 | orchestrator | ok: [testbed-manager]
2026-02-02 02:43:30.505531 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:43:30.505534 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:43:30.505538 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:43:30.505542 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:43:30.505546 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:43:30.505550 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:43:30.505553 | orchestrator |
2026-02-02 02:43:30.505557 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-02-02 02:43:30.505561 | orchestrator | Monday 02 February 2026 02:42:26 +0000 (0:00:01.102) 0:03:54.942 *******
2026-02-02 02:43:30.505565 | orchestrator | ok: [testbed-manager]
2026-02-02 02:43:30.505569 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:43:30.505572 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:43:30.505576 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:43:30.505580 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:43:30.505584 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:43:30.505588 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:43:30.505592 | orchestrator |
2026-02-02 02:43:30.505596 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-02-02 02:43:30.505601 | orchestrator | Monday 02 February 2026 02:42:26 +0000 (0:00:00.324) 0:03:55.266 *******
2026-02-02 02:43:30.505605 | orchestrator | ok: [testbed-manager]
2026-02-02 02:43:30.505637 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:43:30.505642 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:43:30.505647 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:43:30.505652 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:43:30.505659 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:43:30.505664 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:43:30.505670 | orchestrator |
2026-02-02 02:43:30.505676 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-02-02 02:43:30.505682 | orchestrator | Monday 02 February 2026 02:42:27 +0000 (0:00:00.350) 0:03:55.617 *******
2026-02-02 02:43:30.505688 | orchestrator | ok: [testbed-manager]
2026-02-02 02:43:30.505694 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:43:30.505700 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:43:30.505733 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:43:30.505742 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:43:30.505753 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:43:30.505762 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:43:30.505772 | orchestrator |
2026-02-02 02:43:30.505782 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-02-02 02:43:30.505790 | orchestrator | Monday 02 February 2026 02:42:27 +0000 (0:00:00.327) 0:03:55.945 *******
2026-02-02 02:43:30.505797 | orchestrator | ok: [testbed-manager]
2026-02-02 02:43:30.505803 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:43:30.505810 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:43:30.505816 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:43:30.505822 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:43:30.505829 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:43:30.505835 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:43:30.505841 | orchestrator |
2026-02-02 02:43:30.505848 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-02-02 02:43:30.505854 | orchestrator | Monday 02 February 2026 02:42:32 +0000 (0:00:05.333) 0:04:01.278 *******
2026-02-02 02:43:30.505862 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 02:43:30.505871 | orchestrator |
2026-02-02 02:43:30.505877 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-02-02 02:43:30.505884 | orchestrator | Monday 02 February 2026 02:42:33 +0000 (0:00:00.401) 0:04:01.680 *******
2026-02-02 02:43:30.505890 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-02-02 02:43:30.505896 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-02-02 02:43:30.505902 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-02-02 02:43:30.505908 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-02-02 02:43:30.505914 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:43:30.505936 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-02-02 02:43:30.505943 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:43:30.505949 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-02-02 02:43:30.505956 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-02-02 02:43:30.505962 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-02-02 02:43:30.505967 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:43:30.505973 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:43:30.505980 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-02-02 02:43:30.505986 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-02-02 02:43:30.505993 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-02-02 02:43:30.506000 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-02-02 02:43:30.506072 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:43:30.506082 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:43:30.506088 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-02-02 02:43:30.506095 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-02-02 02:43:30.506101 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:43:30.506108 | orchestrator |
2026-02-02 02:43:30.506115 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-02-02 02:43:30.506121 | orchestrator | Monday 02 February 2026 02:42:33 +0000 (0:00:00.359) 0:04:02.039 *******
2026-02-02 02:43:30.506129 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 02:43:30.506135 | orchestrator |
2026-02-02 02:43:30.506142 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-02-02 02:43:30.506158 | orchestrator | Monday 02 February 2026 02:42:33 +0000 (0:00:00.397) 0:04:02.436 *******
2026-02-02 02:43:30.506165 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-02-02 02:43:30.506172 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:43:30.506179 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-02-02 02:43:30.506186 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-02-02 02:43:30.506192 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:43:30.506199 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:43:30.506206 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-02-02 02:43:30.506214 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-02-02 02:43:30.506220 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:43:30.506227 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-02-02 02:43:30.506234 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:43:30.506242 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:43:30.506249 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-02-02 02:43:30.506254 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:43:30.506259 | orchestrator |
2026-02-02 02:43:30.506264 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-02-02 02:43:30.506268 | orchestrator | Monday 02 February 2026 02:42:34 +0000 (0:00:00.343) 0:04:02.780 *******
2026-02-02 02:43:30.506273 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 02:43:30.506278 | orchestrator |
2026-02-02 02:43:30.506282 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-02-02 02:43:30.506287 | orchestrator | Monday 02 February 2026 02:42:34 +0000 (0:00:00.475) 0:04:03.256 *******
2026-02-02 02:43:30.506291 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:43:30.506295 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:43:30.506299 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:43:30.506303 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:43:30.506306 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:43:30.506310 | orchestrator | changed: [testbed-manager]
2026-02-02 02:43:30.506314 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:43:30.506317 | orchestrator |
2026-02-02 02:43:30.506321 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-02-02 02:43:30.506325 | orchestrator | Monday 02 February 2026 02:43:07 +0000 (0:00:32.997) 0:04:36.253 *******
2026-02-02 02:43:30.506329 | orchestrator | changed: [testbed-manager]
2026-02-02 02:43:30.506332 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:43:30.506336 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:43:30.506340 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:43:30.506343 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:43:30.506347 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:43:30.506351 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:43:30.506354 | orchestrator |
2026-02-02 02:43:30.506358 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-02-02 02:43:30.506367 | orchestrator | Monday 02 February 2026 02:43:15 +0000 (0:00:07.565) 0:04:43.819 *******
2026-02-02 02:43:30.506370 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:43:30.506374 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:43:30.506378 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:43:30.506382 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:43:30.506385 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:43:30.506389 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:43:30.506392 | orchestrator | changed: [testbed-manager]
2026-02-02 02:43:30.506396 | orchestrator |
2026-02-02 02:43:30.506400 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-02-02 02:43:30.506408 | orchestrator | Monday 02 February 2026 02:43:22 +0000 (0:00:07.545) 0:04:51.364 *******
2026-02-02 02:43:30.506411 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:43:30.506415 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:43:30.506419 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:43:30.506422 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:43:30.506426 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:43:30.506430 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:43:30.506434 | orchestrator | ok: [testbed-manager]
2026-02-02 02:43:30.506437 | orchestrator |
2026-02-02 02:43:30.506441 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-02-02 02:43:30.506445 | orchestrator | Monday 02 February 2026 02:43:24 +0000 (0:00:01.666) 0:04:53.031 *******
2026-02-02 02:43:30.506449 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:43:30.506453 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:43:30.506456 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:43:30.506460 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:43:30.506464 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:43:30.506467 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:43:30.506471 | orchestrator | changed: [testbed-manager]
2026-02-02 02:43:30.506475 | orchestrator |
2026-02-02 02:43:30.506484 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-02-02 02:43:41.786683 | orchestrator | Monday 02 February 2026 02:43:30 +0000 (0:00:05.936) 0:04:58.968 *******
2026-02-02 02:43:41.786803 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 02:43:41.786822 | orchestrator |
2026-02-02 02:43:41.786830 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-02-02 02:43:41.786838 | orchestrator | Monday 02 February 2026 02:43:30 +0000 (0:00:00.444) 0:04:59.412 *******
2026-02-02 02:43:41.786845 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:43:41.786853 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:43:41.786859 | orchestrator | changed: [testbed-manager]
2026-02-02 02:43:41.786866 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:43:41.786872 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:43:41.786878 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:43:41.786884 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:43:41.786890 | orchestrator |
2026-02-02 02:43:41.786897 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-02-02 02:43:41.786903 | orchestrator | Monday 02 February 2026 02:43:31 +0000 (0:00:00.716) 0:05:00.128 *******
2026-02-02 02:43:41.786909 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:43:41.786917 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:43:41.786923 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:43:41.786929 | orchestrator | ok: [testbed-manager]
2026-02-02 02:43:41.786935 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:43:41.786941 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:43:41.786947 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:43:41.786954 | orchestrator |
2026-02-02 02:43:41.786960 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-02-02 02:43:41.786966 | orchestrator | Monday 02 February 2026 02:43:33 +0000 (0:00:01.652) 0:05:01.781 *******
2026-02-02 02:43:41.786972 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:43:41.786979 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:43:41.786985 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:43:41.786991 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:43:41.786997 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:43:41.787004 | orchestrator | changed: [testbed-manager]
2026-02-02 02:43:41.787011 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:43:41.787017 | orchestrator |
2026-02-02 02:43:41.787023 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-02-02 02:43:41.787030 | orchestrator | Monday 02 February 2026 02:43:34 +0000 (0:00:00.756) 0:05:02.537 *******
2026-02-02 02:43:41.787061 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:43:41.787071 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:43:41.787080 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:43:41.787089 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:43:41.787099 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:43:41.787109 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:43:41.787119 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:43:41.787129 | orchestrator |
2026-02-02 02:43:41.787139 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-02-02 02:43:41.787149 | orchestrator | Monday 02 February 2026 02:43:34 +0000 (0:00:00.317) 0:05:02.855 *******
2026-02-02 02:43:41.787158 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:43:41.787169 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:43:41.787179 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:43:41.787189 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:43:41.787200 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:43:41.787211 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:43:41.787222 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:43:41.787232 | orchestrator |
2026-02-02 02:43:41.787243 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-02-02 02:43:41.787251 | orchestrator | Monday 02 February 2026 02:43:34 +0000 (0:00:00.435) 0:05:03.290 *******
2026-02-02 02:43:41.787258 | orchestrator | ok: [testbed-manager]
2026-02-02 02:43:41.787266 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:43:41.787273 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:43:41.787282 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:43:41.787292 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:43:41.787301 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:43:41.787311 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:43:41.787320 | orchestrator |
2026-02-02 02:43:41.787329 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-02-02 02:43:41.787355 | orchestrator | Monday 02 February 2026 02:43:35 +0000 (0:00:00.306) 0:05:03.596 *******
2026-02-02 02:43:41.787366 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:43:41.787377 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:43:41.787388 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:43:41.787399 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:43:41.787409 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:43:41.787483 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:43:41.787491 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:43:41.787499 | orchestrator |
2026-02-02 02:43:41.787506 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-02-02 02:43:41.787515 | orchestrator | Monday 02 February 2026 02:43:35 +0000 (0:00:00.298) 0:05:03.894 *******
2026-02-02 02:43:41.787522 | orchestrator | ok: [testbed-manager]
2026-02-02 02:43:41.787528 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:43:41.787534 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:43:41.787540 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:43:41.787546 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:43:41.787552 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:43:41.787558 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:43:41.787564 | orchestrator |
2026-02-02 02:43:41.787571 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-02-02 02:43:41.787582 | orchestrator | Monday 02 February 2026 02:43:35 +0000 (0:00:00.332) 0:05:04.227 *******
2026-02-02 02:43:41.787594 | orchestrator | ok: [testbed-manager] =>
2026-02-02 02:43:41.787604 | orchestrator |  docker_version: 5:27.5.1
2026-02-02 02:43:41.787672 | orchestrator | ok: [testbed-node-3] =>
2026-02-02 02:43:41.787685 | orchestrator |  docker_version: 5:27.5.1
2026-02-02 02:43:41.787696 | orchestrator | ok: [testbed-node-4] =>
2026-02-02 02:43:41.787706 | orchestrator |  docker_version: 5:27.5.1
2026-02-02 02:43:41.787717 | orchestrator | ok: [testbed-node-5] =>
2026-02-02 02:43:41.787727 | orchestrator |  docker_version: 5:27.5.1
2026-02-02 02:43:41.787771 | orchestrator | ok: [testbed-node-0] =>
2026-02-02 02:43:41.787778 | orchestrator |  docker_version: 5:27.5.1
2026-02-02 02:43:41.787785 | orchestrator | ok: [testbed-node-1] =>
2026-02-02 02:43:41.787791 | orchestrator |  docker_version: 5:27.5.1
2026-02-02 02:43:41.787797 | orchestrator | ok: [testbed-node-2] =>
2026-02-02 02:43:41.787803 | orchestrator |  docker_version: 5:27.5.1
2026-02-02 02:43:41.787809 | orchestrator |
2026-02-02 02:43:41.787818 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-02-02 02:43:41.787828 | orchestrator | Monday 02 February 2026 02:43:36 +0000 (0:00:00.299) 0:05:04.527 *******
2026-02-02 02:43:41.787838 | orchestrator | ok: [testbed-manager] =>
2026-02-02 02:43:41.787848 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-02 02:43:41.787860 | orchestrator | ok: [testbed-node-3] =>
2026-02-02 02:43:41.787870 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-02 02:43:41.787879 | orchestrator | ok: [testbed-node-4] =>
2026-02-02 02:43:41.787890 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-02 02:43:41.787901 | orchestrator | ok: [testbed-node-5] =>
2026-02-02 02:43:41.787907 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-02 02:43:41.787913 | orchestrator | ok: [testbed-node-0] =>
2026-02-02 02:43:41.787920 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-02 02:43:41.787926 | orchestrator | ok: [testbed-node-1] =>
2026-02-02 02:43:41.787932 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-02 02:43:41.787938 | orchestrator | ok: [testbed-node-2] =>
2026-02-02 02:43:41.787944 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-02 02:43:41.787950 | orchestrator |
2026-02-02 02:43:41.787957 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-02-02 02:43:41.787963 | orchestrator | Monday 02 February 2026 02:43:36 +0000 (0:00:00.306) 0:05:04.833 *******
2026-02-02 02:43:41.787969 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:43:41.787975 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:43:41.787981 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:43:41.787987 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:43:41.787993 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:43:41.787999 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:43:41.788005 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:43:41.788011 | orchestrator |
2026-02-02 02:43:41.788017 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-02-02 02:43:41.788023 | orchestrator | Monday 02 February 2026 02:43:36 +0000 (0:00:00.295) 0:05:05.150 *******
2026-02-02 02:43:41.788030 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:43:41.788035 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:43:41.788041 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:43:41.788047 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:43:41.788054 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:43:41.788059 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:43:41.788066 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:43:41.788072 | orchestrator |
2026-02-02 02:43:41.788078 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-02-02 02:43:41.788084 | orchestrator | Monday 02 February 2026 02:43:36 +0000 (0:00:00.295) 0:05:05.446 *******
2026-02-02 02:43:41.788095 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 02:43:41.788107 | orchestrator |
2026-02-02 02:43:41.788117 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-02-02 02:43:41.788127 | orchestrator | Monday 02 February 2026 02:43:37 +0000 (0:00:00.470) 0:05:05.916 *******
2026-02-02 02:43:41.788137 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:43:41.788148 | orchestrator | ok: [testbed-manager]
2026-02-02 02:43:41.788158 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:43:41.788169 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:43:41.788178 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:43:41.788195 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:43:41.788202 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:43:41.788208 | orchestrator |
2026-02-02 02:43:41.788217 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-02-02 02:43:41.788227 | orchestrator | Monday 02 February 2026 02:43:38 +0000 (0:00:00.935) 0:05:06.851 *******
2026-02-02 02:43:41.788237 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:43:41.788251 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:43:41.788265 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:43:41.788274 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:43:41.788284 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:43:41.788303 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:43:41.788312 | orchestrator | ok: [testbed-manager]
2026-02-02 02:43:41.788322 | orchestrator |
2026-02-02 02:43:41.788332 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-02-02 02:43:41.788343 | orchestrator | Monday 02 February 2026 02:43:41 +0000 (0:00:02.953) 0:05:09.805 *******
2026-02-02 02:43:41.788354 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-02-02 02:43:41.788365 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-02-02 02:43:41.788375 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-02-02 02:43:41.788384 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-02-02 02:43:41.788394 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-02-02 02:43:41.788404 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-02-02 02:43:41.788414 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:43:41.788424 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-02-02 02:43:41.788434 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-02-02 02:43:41.788443 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-02-02 02:43:41.788453 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:43:41.788459 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-02-02 02:43:41.788466 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:43:41.788472 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-02-02 02:43:41.788478 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-02-02 02:43:41.788484 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-02-02 02:43:41.788500 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-02-02 02:44:39.921475 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-02-02 02:44:39.921625 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:44:39.921734 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-02-02 02:44:39.921749 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-02-02 02:44:39.921761 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-02-02 02:44:39.921771 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:44:39.921782 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:44:39.921794 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-02-02 02:44:39.921805 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-02-02 02:44:39.921816 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-02-02 02:44:39.921826 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:44:39.921838 | orchestrator |
2026-02-02 02:44:39.921850 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-02-02 02:44:39.921862 | orchestrator | Monday 02 February 2026 02:43:42 +0000 (0:00:00.691) 0:05:10.496 *******
2026-02-02 02:44:39.921874 | orchestrator | ok: [testbed-manager]
2026-02-02 02:44:39.921885 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:44:39.921896 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:44:39.921907 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:44:39.921919 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:44:39.921932 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:44:39.921974 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:44:39.921988 | orchestrator |
2026-02-02 02:44:39.922001 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-02-02 02:44:39.922077 | orchestrator | Monday 02 February 2026 02:43:48 +0000 (0:00:06.219) 0:05:16.715 *******
2026-02-02 02:44:39.922093 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:44:39.922105 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:44:39.922118 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:44:39.922130 | orchestrator | ok: [testbed-manager]
2026-02-02 02:44:39.922143 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:44:39.922156 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:44:39.922168 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:44:39.922178 | orchestrator |
2026-02-02 02:44:39.922189 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-02-02 02:44:39.922200 | orchestrator | Monday 02 February 2026 02:43:49 +0000 (0:00:01.023) 0:05:17.738 *******
2026-02-02 02:44:39.922211 | orchestrator | ok: [testbed-manager]
2026-02-02 02:44:39.922222 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:44:39.922242 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:44:39.922260 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:44:39.922271 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:44:39.922282 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:44:39.922293 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:44:39.922303 | orchestrator |
2026-02-02 02:44:39.922314 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-02-02 02:44:39.922325 | orchestrator | Monday 02 February 2026 02:43:55 +0000 (0:00:06.623) 0:05:24.362 *******
2026-02-02 02:44:39.922336 | orchestrator | changed: [testbed-manager]
2026-02-02 02:44:39.922347 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:44:39.922396 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:44:39.922410 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:44:39.922421 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:44:39.922431 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:44:39.922442 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:44:39.922453 | orchestrator |
2026-02-02 02:44:39.922464 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-02-02 02:44:39.922474 | orchestrator | Monday 02 February 2026 02:43:59 +0000 (0:00:03.314) 0:05:27.676 *******
2026-02-02 02:44:39.922485 | orchestrator | ok: [testbed-manager]
2026-02-02 02:44:39.922496 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:44:39.922506 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:44:39.922517 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:44:39.922528 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:44:39.922538 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:44:39.922549 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:44:39.922560 | orchestrator |
2026-02-02 02:44:39.922571 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-02-02 02:44:39.922581 | orchestrator | Monday 02 February 2026 02:44:00 +0000 (0:00:01.284) 0:05:28.961 *******
2026-02-02 02:44:39.922592 | orchestrator | ok: [testbed-manager]
2026-02-02 02:44:39.922603 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:44:39.922614 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:44:39.922624 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:44:39.922661 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:44:39.922674 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:44:39.922721 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:44:39.922734 | orchestrator |
2026-02-02 02:44:39.922746 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-02-02 02:44:39.922756 | orchestrator | Monday 02 February 2026 02:44:01 +0000 (0:00:00.664) 0:05:30.445 *******
2026-02-02 02:44:39.922767 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:44:39.922778 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:44:39.922788 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:44:39.922799 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:44:39.922821 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:44:39.922832 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:44:39.922843 | orchestrator | changed: [testbed-manager]
2026-02-02 02:44:39.922853 | orchestrator |
2026-02-02 02:44:39.922864 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-02-02 02:44:39.922875 | orchestrator | Monday 02 February 2026 02:44:02 +0000 (0:00:00.664) 0:05:31.110 *******
2026-02-02 02:44:39.922886 | orchestrator | ok: [testbed-manager]
2026-02-02 02:44:39.922897 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:44:39.922936 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:44:39.922947 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:44:39.922958 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:44:39.922969 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:44:39.922979 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:44:39.922990 | orchestrator |
2026-02-02 02:44:39.923001 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-02-02 02:44:39.923032 | orchestrator | Monday 02 February 2026 02:44:11 +0000 (0:00:09.244) 0:05:40.354 *******
2026-02-02 02:44:39.923043 | orchestrator | changed: [testbed-manager]
2026-02-02 02:44:39.923065 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:44:39.923076 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:44:39.923087 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:44:39.923098 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:44:39.923108 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:44:39.923119 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:44:39.923129 | orchestrator |
2026-02-02 02:44:39.923140 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-02-02 02:44:39.923151 | orchestrator | Monday 02 February 2026 02:44:13 +0000 (0:00:01.855) 0:05:42.210 *******
2026-02-02 02:44:39.923162 | orchestrator | ok: [testbed-manager]
2026-02-02 02:44:39.923173 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:44:39.923184 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:44:39.923194 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:44:39.923205 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:44:39.923216 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:44:39.923226 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:44:39.923237 | orchestrator |
2026-02-02 02:44:39.923248 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-02-02 02:44:39.923259 | orchestrator | Monday 02 February 2026 02:44:22 +0000 (0:00:08.746) 0:05:50.956 *******
2026-02-02 02:44:39.923270 | orchestrator | ok: [testbed-manager]
2026-02-02 02:44:39.923280 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:44:39.923291 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:44:39.923301 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:44:39.923312 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:44:39.923323 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:44:39.923333 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:44:39.923344 | orchestrator |
2026-02-02 02:44:39.923355 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-02-02 02:44:39.923366 | orchestrator | Monday 02 February 2026 02:44:33 +0000 (0:00:10.963) 0:06:01.919 *******
2026-02-02 02:44:39.923377 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-02-02 02:44:39.923388 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-02-02 02:44:39.923398 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-02-02 02:44:39.923409 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-02-02 02:44:39.923420 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-02-02 02:44:39.923430 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-02-02 02:44:39.923441 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-02-02 02:44:39.923452 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-02-02 02:44:39.923463 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-02-02 02:44:39.923481 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-02-02 02:44:39.923492 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-02-02 02:44:39.923550 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-02-02 02:44:39.923562 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-02-02 02:44:39.923573 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-02-02 02:44:39.923584 | orchestrator |
2026-02-02 02:44:39.923595 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-02-02 02:44:39.923606 | orchestrator | Monday 02 February 2026 02:44:34 +0000 (0:00:01.187) 0:06:03.106 *******
2026-02-02 02:44:39.923616 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:44:39.923627 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:44:39.923660 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:44:39.923672 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:44:39.923683 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:44:39.923693 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:44:39.923704 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:44:39.923715 | orchestrator |
2026-02-02 02:44:39.923733 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-02-02 02:44:39.923752 | orchestrator | Monday 02 February 2026 02:44:35 +0000 (0:00:00.523) 0:06:03.630 *******
2026-02-02 02:44:39.923769 | orchestrator | ok: [testbed-manager]
2026-02-02 02:44:39.923787 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:44:39.923804 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:44:39.923822 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:44:39.923839 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:44:39.923856 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:44:39.923881 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:44:39.923899 | orchestrator |
2026-02-02 02:44:39.923919 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-02-02 02:44:39.923982 | orchestrator | Monday 02 February 2026 02:44:38 +0000 (0:00:03.783) 0:06:07.414 *******
2026-02-02 02:44:39.924004 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:44:39.924025 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:44:39.924046 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:44:39.924067 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:44:39.924087 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:44:39.924103 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:44:39.924114 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:44:39.924125 | orchestrator |
2026-02-02 02:44:39.924137 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-02-02 02:44:39.924148 | orchestrator | Monday 02 February 2026 02:44:39 +0000 (0:00:00.672) 0:06:08.087 *******
2026-02-02 02:44:39.924159 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-02-02 02:44:39.924196 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-02-02 02:44:39.924208 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:44:39.924242 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-02-02 02:44:39.924254 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-02-02 02:44:39.924265 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:44:39.924275 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-02-02 02:44:39.924286 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-02-02 02:44:39.924297 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:44:39.924321 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-02-02 02:44:59.384853 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-02-02 02:44:59.384970 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:44:59.384986 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-02-02 02:44:59.384998 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-02-02 02:44:59.385009 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:44:59.385047 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-02-02 02:44:59.385059 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-02-02 02:44:59.385069 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:44:59.385080 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-02-02 02:44:59.385091 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-02-02 02:44:59.385101 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:44:59.385112 | orchestrator |
2026-02-02 02:44:59.385125 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install
python bindings from pip)] *** 2026-02-02 02:44:59.385137 | orchestrator | Monday 02 February 2026 02:44:40 +0000 (0:00:00.575) 0:06:08.663 ******* 2026-02-02 02:44:59.385148 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:44:59.385159 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:44:59.385169 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:44:59.385180 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:44:59.385196 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:44:59.385214 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:44:59.385230 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:44:59.385247 | orchestrator | 2026-02-02 02:44:59.385265 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-02-02 02:44:59.385283 | orchestrator | Monday 02 February 2026 02:44:40 +0000 (0:00:00.507) 0:06:09.171 ******* 2026-02-02 02:44:59.385299 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:44:59.385317 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:44:59.385335 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:44:59.385354 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:44:59.385375 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:44:59.385391 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:44:59.385404 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:44:59.385416 | orchestrator | 2026-02-02 02:44:59.385429 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-02-02 02:44:59.385442 | orchestrator | Monday 02 February 2026 02:44:41 +0000 (0:00:00.548) 0:06:09.719 ******* 2026-02-02 02:44:59.385454 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:44:59.385466 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:44:59.385478 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:44:59.385491 | orchestrator | skipping: 
[testbed-node-5] 2026-02-02 02:44:59.385503 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:44:59.385515 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:44:59.385527 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:44:59.385539 | orchestrator | 2026-02-02 02:44:59.385551 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-02-02 02:44:59.385561 | orchestrator | Monday 02 February 2026 02:44:41 +0000 (0:00:00.547) 0:06:10.266 ******* 2026-02-02 02:44:59.385572 | orchestrator | ok: [testbed-manager] 2026-02-02 02:44:59.385583 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:44:59.385594 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:44:59.385605 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:44:59.385615 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:44:59.385626 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:44:59.385637 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:44:59.385682 | orchestrator | 2026-02-02 02:44:59.385695 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-02-02 02:44:59.385714 | orchestrator | Monday 02 February 2026 02:44:43 +0000 (0:00:01.877) 0:06:12.144 ******* 2026-02-02 02:44:59.385733 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 02:44:59.385752 | orchestrator | 2026-02-02 02:44:59.385768 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-02-02 02:44:59.385785 | orchestrator | Monday 02 February 2026 02:44:44 +0000 (0:00:00.898) 0:06:13.043 ******* 2026-02-02 02:44:59.385827 | orchestrator | ok: [testbed-manager] 2026-02-02 02:44:59.385848 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:44:59.385888 | orchestrator | changed: 
[testbed-node-4] 2026-02-02 02:44:59.385912 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:44:59.385924 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:44:59.385935 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:44:59.385945 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:44:59.385956 | orchestrator | 2026-02-02 02:44:59.385967 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-02-02 02:44:59.385978 | orchestrator | Monday 02 February 2026 02:44:45 +0000 (0:00:00.827) 0:06:13.870 ******* 2026-02-02 02:44:59.385988 | orchestrator | ok: [testbed-manager] 2026-02-02 02:44:59.385999 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:44:59.386010 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:44:59.386087 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:44:59.386098 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:44:59.386109 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:44:59.386120 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:44:59.386131 | orchestrator | 2026-02-02 02:44:59.386142 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-02-02 02:44:59.386153 | orchestrator | Monday 02 February 2026 02:44:46 +0000 (0:00:00.861) 0:06:14.732 ******* 2026-02-02 02:44:59.386164 | orchestrator | ok: [testbed-manager] 2026-02-02 02:44:59.386175 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:44:59.386185 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:44:59.386197 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:44:59.386207 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:44:59.386218 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:44:59.386228 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:44:59.386239 | orchestrator | 2026-02-02 02:44:59.386250 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay 
file is changed] *** 2026-02-02 02:44:59.386281 | orchestrator | Monday 02 February 2026 02:44:47 +0000 (0:00:01.580) 0:06:16.312 ******* 2026-02-02 02:44:59.386293 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:44:59.386304 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:44:59.386315 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:44:59.386325 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:44:59.386336 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:44:59.386347 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:44:59.386358 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:44:59.386368 | orchestrator | 2026-02-02 02:44:59.386379 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-02-02 02:44:59.386390 | orchestrator | Monday 02 February 2026 02:44:49 +0000 (0:00:01.377) 0:06:17.689 ******* 2026-02-02 02:44:59.386401 | orchestrator | ok: [testbed-manager] 2026-02-02 02:44:59.386411 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:44:59.386422 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:44:59.386433 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:44:59.386443 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:44:59.386454 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:44:59.386465 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:44:59.386475 | orchestrator | 2026-02-02 02:44:59.386486 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-02-02 02:44:59.386497 | orchestrator | Monday 02 February 2026 02:44:50 +0000 (0:00:01.283) 0:06:18.973 ******* 2026-02-02 02:44:59.386508 | orchestrator | changed: [testbed-manager] 2026-02-02 02:44:59.386519 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:44:59.386529 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:44:59.386540 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:44:59.386561 | orchestrator | changed: 
[testbed-node-0] 2026-02-02 02:44:59.386572 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:44:59.386583 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:44:59.386594 | orchestrator | 2026-02-02 02:44:59.386613 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-02-02 02:44:59.386624 | orchestrator | Monday 02 February 2026 02:44:51 +0000 (0:00:01.377) 0:06:20.351 ******* 2026-02-02 02:44:59.386635 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 02:44:59.386674 | orchestrator | 2026-02-02 02:44:59.386685 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-02-02 02:44:59.386696 | orchestrator | Monday 02 February 2026 02:44:53 +0000 (0:00:01.148) 0:06:21.499 ******* 2026-02-02 02:44:59.386707 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:44:59.386718 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:44:59.386728 | orchestrator | ok: [testbed-manager] 2026-02-02 02:44:59.386739 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:44:59.386750 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:44:59.386760 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:44:59.386771 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:44:59.386782 | orchestrator | 2026-02-02 02:44:59.386794 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-02-02 02:44:59.386812 | orchestrator | Monday 02 February 2026 02:44:54 +0000 (0:00:01.341) 0:06:22.841 ******* 2026-02-02 02:44:59.386831 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:44:59.386849 | orchestrator | ok: [testbed-manager] 2026-02-02 02:44:59.386868 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:44:59.386888 | orchestrator | ok: [testbed-node-5] 
2026-02-02 02:44:59.386907 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:44:59.386925 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:44:59.386940 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:44:59.386951 | orchestrator | 2026-02-02 02:44:59.386962 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-02-02 02:44:59.386974 | orchestrator | Monday 02 February 2026 02:44:55 +0000 (0:00:01.210) 0:06:24.052 ******* 2026-02-02 02:44:59.386985 | orchestrator | ok: [testbed-manager] 2026-02-02 02:44:59.386995 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:44:59.387006 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:44:59.387017 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:44:59.387027 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:44:59.387038 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:44:59.387048 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:44:59.387060 | orchestrator | 2026-02-02 02:44:59.387079 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-02-02 02:44:59.387097 | orchestrator | Monday 02 February 2026 02:44:56 +0000 (0:00:01.135) 0:06:25.188 ******* 2026-02-02 02:44:59.387114 | orchestrator | ok: [testbed-manager] 2026-02-02 02:44:59.387155 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:44:59.387181 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:44:59.387199 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:44:59.387218 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:44:59.387235 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:44:59.387246 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:44:59.387257 | orchestrator | 2026-02-02 02:44:59.387268 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-02-02 02:44:59.387279 | orchestrator | Monday 02 February 2026 02:44:58 +0000 (0:00:01.362) 0:06:26.551 ******* 2026-02-02 02:44:59.387289 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 02:44:59.387301 | orchestrator | 2026-02-02 02:44:59.387311 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-02 02:44:59.387322 | orchestrator | Monday 02 February 2026 02:44:59 +0000 (0:00:00.962) 0:06:27.514 ******* 2026-02-02 02:44:59.387333 | orchestrator | 2026-02-02 02:44:59.387344 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-02 02:44:59.387366 | orchestrator | Monday 02 February 2026 02:44:59 +0000 (0:00:00.059) 0:06:27.573 ******* 2026-02-02 02:44:59.387377 | orchestrator | 2026-02-02 02:44:59.387388 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-02 02:44:59.387399 | orchestrator | Monday 02 February 2026 02:44:59 +0000 (0:00:00.040) 0:06:27.614 ******* 2026-02-02 02:44:59.387409 | orchestrator | 2026-02-02 02:44:59.387420 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-02 02:44:59.387442 | orchestrator | Monday 02 February 2026 02:44:59 +0000 (0:00:00.039) 0:06:27.653 ******* 2026-02-02 02:45:24.813949 | orchestrator | 2026-02-02 02:45:24.814096 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-02 02:45:24.814110 | orchestrator | Monday 02 February 2026 02:44:59 +0000 (0:00:00.047) 0:06:27.701 ******* 2026-02-02 02:45:24.814118 | orchestrator | 2026-02-02 02:45:24.814126 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-02 02:45:24.814133 | orchestrator | Monday 02 February 2026 02:44:59 +0000 (0:00:00.040) 0:06:27.741 ******* 2026-02-02 02:45:24.814141 | orchestrator | 
2026-02-02 02:45:24.814149 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-02 02:45:24.814156 | orchestrator | Monday 02 February 2026 02:44:59 +0000 (0:00:00.039) 0:06:27.781 ******* 2026-02-02 02:45:24.814171 | orchestrator | 2026-02-02 02:45:24.814179 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-02 02:45:24.814187 | orchestrator | Monday 02 February 2026 02:44:59 +0000 (0:00:00.061) 0:06:27.843 ******* 2026-02-02 02:45:24.814194 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:45:24.814203 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:45:24.814210 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:45:24.814217 | orchestrator | 2026-02-02 02:45:24.814225 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-02-02 02:45:24.814232 | orchestrator | Monday 02 February 2026 02:45:00 +0000 (0:00:01.086) 0:06:28.929 ******* 2026-02-02 02:45:24.814240 | orchestrator | changed: [testbed-manager] 2026-02-02 02:45:24.814248 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:45:24.814255 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:45:24.814262 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:45:24.814274 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:45:24.814286 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:45:24.814298 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:45:24.814310 | orchestrator | 2026-02-02 02:45:24.814322 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-02-02 02:45:24.814334 | orchestrator | Monday 02 February 2026 02:45:01 +0000 (0:00:01.501) 0:06:30.430 ******* 2026-02-02 02:45:24.814346 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:45:24.814358 | orchestrator | changed: [testbed-manager] 2026-02-02 02:45:24.814369 | orchestrator | changed: [testbed-node-4] 
2026-02-02 02:45:24.814381 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:45:24.814392 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:45:24.814402 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:45:24.814413 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:45:24.814426 | orchestrator | 2026-02-02 02:45:24.814438 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-02-02 02:45:24.814451 | orchestrator | Monday 02 February 2026 02:45:03 +0000 (0:00:01.185) 0:06:31.616 ******* 2026-02-02 02:45:24.814465 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:45:24.814478 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:45:24.814492 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:45:24.814504 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:45:24.814517 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:45:24.814525 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:45:24.814534 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:45:24.814543 | orchestrator | 2026-02-02 02:45:24.814552 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-02-02 02:45:24.814561 | orchestrator | Monday 02 February 2026 02:45:05 +0000 (0:00:02.381) 0:06:33.997 ******* 2026-02-02 02:45:24.814591 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:45:24.814599 | orchestrator | 2026-02-02 02:45:24.814607 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-02-02 02:45:24.814614 | orchestrator | Monday 02 February 2026 02:45:05 +0000 (0:00:00.098) 0:06:34.096 ******* 2026-02-02 02:45:24.814621 | orchestrator | ok: [testbed-manager] 2026-02-02 02:45:24.814628 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:45:24.814635 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:45:24.814642 | orchestrator | changed: [testbed-node-3] 2026-02-02 
02:45:24.814672 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:45:24.814680 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:45:24.814688 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:45:24.814695 | orchestrator | 2026-02-02 02:45:24.814702 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-02-02 02:45:24.814711 | orchestrator | Monday 02 February 2026 02:45:06 +0000 (0:00:00.977) 0:06:35.074 ******* 2026-02-02 02:45:24.814739 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:45:24.814767 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:45:24.814780 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:45:24.814792 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:45:24.814803 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:45:24.814816 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:45:24.814829 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:45:24.814840 | orchestrator | 2026-02-02 02:45:24.814853 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-02-02 02:45:24.814866 | orchestrator | Monday 02 February 2026 02:45:07 +0000 (0:00:00.600) 0:06:35.674 ******* 2026-02-02 02:45:24.814880 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 02:45:24.814896 | orchestrator | 2026-02-02 02:45:24.814909 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-02-02 02:45:24.814924 | orchestrator | Monday 02 February 2026 02:45:08 +0000 (0:00:01.116) 0:06:36.791 ******* 2026-02-02 02:45:24.814937 | orchestrator | ok: [testbed-manager] 2026-02-02 02:45:24.814950 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:45:24.814957 | orchestrator 
| ok: [testbed-node-4] 2026-02-02 02:45:24.814969 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:45:24.814981 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:45:24.814993 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:45:24.815004 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:45:24.815016 | orchestrator | 2026-02-02 02:45:24.815027 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-02-02 02:45:24.815037 | orchestrator | Monday 02 February 2026 02:45:09 +0000 (0:00:00.808) 0:06:37.599 ******* 2026-02-02 02:45:24.815049 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-02-02 02:45:24.815082 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-02-02 02:45:24.815097 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-02-02 02:45:24.815110 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-02-02 02:45:24.815122 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-02-02 02:45:24.815135 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-02-02 02:45:24.815142 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-02-02 02:45:24.815149 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-02-02 02:45:24.815157 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-02-02 02:45:24.815164 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-02-02 02:45:24.815171 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-02-02 02:45:24.815178 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-02-02 02:45:24.815195 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-02-02 02:45:24.815202 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-02-02 02:45:24.815209 | orchestrator | 2026-02-02 02:45:24.815216 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-02-02 02:45:24.815223 | orchestrator | Monday 02 February 2026 02:45:11 +0000 (0:00:02.638) 0:06:40.238 ******* 2026-02-02 02:45:24.815230 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:45:24.815237 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:45:24.815245 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:45:24.815252 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:45:24.815259 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:45:24.815266 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:45:24.815273 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:45:24.815280 | orchestrator | 2026-02-02 02:45:24.815287 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-02-02 02:45:24.815295 | orchestrator | Monday 02 February 2026 02:45:12 +0000 (0:00:00.511) 0:06:40.750 ******* 2026-02-02 02:45:24.815303 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 02:45:24.815312 | orchestrator | 2026-02-02 02:45:24.815319 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-02-02 02:45:24.815326 | orchestrator | Monday 02 February 2026 02:45:13 +0000 (0:00:00.931) 0:06:41.681 ******* 2026-02-02 02:45:24.815333 | orchestrator | ok: [testbed-manager] 2026-02-02 02:45:24.815340 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:45:24.815347 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:45:24.815356 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:45:24.815368 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:45:24.815380 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:45:24.815391 | orchestrator | ok: 
[testbed-node-2] 2026-02-02 02:45:24.815404 | orchestrator | 2026-02-02 02:45:24.815416 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-02-02 02:45:24.815429 | orchestrator | Monday 02 February 2026 02:45:14 +0000 (0:00:00.854) 0:06:42.536 ******* 2026-02-02 02:45:24.815437 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:45:24.815445 | orchestrator | ok: [testbed-manager] 2026-02-02 02:45:24.815452 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:45:24.815459 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:45:24.815466 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:45:24.815473 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:45:24.815480 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:45:24.815487 | orchestrator | 2026-02-02 02:45:24.815494 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-02-02 02:45:24.815501 | orchestrator | Monday 02 February 2026 02:45:15 +0000 (0:00:01.110) 0:06:43.646 ******* 2026-02-02 02:45:24.815508 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:45:24.815516 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:45:24.815523 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:45:24.815530 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:45:24.815537 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:45:24.815544 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:45:24.815551 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:45:24.815558 | orchestrator | 2026-02-02 02:45:24.815566 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-02-02 02:45:24.815573 | orchestrator | Monday 02 February 2026 02:45:15 +0000 (0:00:00.521) 0:06:44.168 ******* 2026-02-02 02:45:24.815580 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:45:24.815587 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:45:24.815594 | 
orchestrator | ok: [testbed-manager]
2026-02-02 02:45:24.815601 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:45:24.815608 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:45:24.815621 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:45:24.815629 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:45:24.815636 | orchestrator |
2026-02-02 02:45:24.815643 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-02-02 02:45:24.815673 | orchestrator | Monday 02 February 2026 02:45:17 +0000 (0:00:01.354) 0:06:45.522 *******
2026-02-02 02:45:24.815681 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:45:24.815688 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:45:24.815695 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:45:24.815702 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:45:24.815709 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:45:24.815717 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:45:24.815724 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:45:24.815731 | orchestrator |
2026-02-02 02:45:24.815738 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-02-02 02:45:24.815746 | orchestrator | Monday 02 February 2026 02:45:17 +0000 (0:00:00.527) 0:06:46.049 *******
2026-02-02 02:45:24.815753 | orchestrator | ok: [testbed-manager]
2026-02-02 02:45:24.815760 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:45:24.815767 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:45:24.815774 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:45:24.815781 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:45:24.815788 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:45:24.815801 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:45:56.952879 | orchestrator |
2026-02-02 02:45:56.953000 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-02-02 02:45:56.953019 | orchestrator | Monday 02 February 2026 02:45:24 +0000 (0:00:07.218) 0:06:53.267 *******
2026-02-02 02:45:56.953034 | orchestrator | ok: [testbed-manager]
2026-02-02 02:45:56.953049 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:45:56.953063 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:45:56.953076 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:45:56.953089 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:45:56.953102 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:45:56.953115 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:45:56.953128 | orchestrator |
2026-02-02 02:45:56.953141 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-02-02 02:45:56.953154 | orchestrator | Monday 02 February 2026 02:45:26 +0000 (0:00:01.555) 0:06:54.823 *******
2026-02-02 02:45:56.953167 | orchestrator | ok: [testbed-manager]
2026-02-02 02:45:56.953180 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:45:56.953193 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:45:56.953206 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:45:56.953218 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:45:56.953231 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:45:56.953244 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:45:56.953257 | orchestrator |
2026-02-02 02:45:56.953270 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-02-02 02:45:56.953283 | orchestrator | Monday 02 February 2026 02:45:28 +0000 (0:00:01.661) 0:06:56.485 *******
2026-02-02 02:45:56.953296 | orchestrator | ok: [testbed-manager]
2026-02-02 02:45:56.953308 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:45:56.953321 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:45:56.953333 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:45:56.953346 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:45:56.953359 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:45:56.953372 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:45:56.953385 | orchestrator |
2026-02-02 02:45:56.953398 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-02 02:45:56.953411 | orchestrator | Monday 02 February 2026 02:45:29 +0000 (0:00:01.672) 0:06:58.158 *******
2026-02-02 02:45:56.953425 | orchestrator | ok: [testbed-manager]
2026-02-02 02:45:56.953439 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:45:56.953455 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:45:56.953498 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:45:56.953514 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:45:56.953529 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:45:56.953544 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:45:56.953559 | orchestrator |
2026-02-02 02:45:56.953574 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-02 02:45:56.953589 | orchestrator | Monday 02 February 2026 02:45:30 +0000 (0:00:00.922) 0:06:59.080 *******
2026-02-02 02:45:56.953605 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:45:56.953621 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:45:56.953636 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:45:56.953651 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:45:56.953748 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:45:56.953764 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:45:56.953778 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:45:56.953792 | orchestrator |
2026-02-02 02:45:56.953804 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-02-02 02:45:56.953818 | orchestrator | Monday 02 February 2026 02:45:31 +0000 (0:00:01.114) 0:07:00.195 *******
2026-02-02 02:45:56.953832 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:45:56.953845 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:45:56.953859 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:45:56.953872 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:45:56.953886 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:45:56.953899 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:45:56.953913 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:45:56.953926 | orchestrator |
2026-02-02 02:45:56.953940 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-02-02 02:45:56.953954 | orchestrator | Monday 02 February 2026 02:45:32 +0000 (0:00:00.572) 0:07:00.767 *******
2026-02-02 02:45:56.953967 | orchestrator | ok: [testbed-manager]
2026-02-02 02:45:56.954000 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:45:56.954014 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:45:56.954083 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:45:56.954096 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:45:56.954111 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:45:56.954131 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:45:56.954146 | orchestrator |
2026-02-02 02:45:56.954162 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-02-02 02:45:56.954176 | orchestrator | Monday 02 February 2026 02:45:32 +0000 (0:00:00.518) 0:07:01.285 *******
2026-02-02 02:45:56.954191 | orchestrator | ok: [testbed-manager]
2026-02-02 02:45:56.954206 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:45:56.954220 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:45:56.954233 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:45:56.954247 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:45:56.954259 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:45:56.954273 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:45:56.954287 | orchestrator |
2026-02-02 02:45:56.954302 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-02-02 02:45:56.954317 | orchestrator | Monday 02 February 2026 02:45:33 +0000 (0:00:00.780) 0:07:02.065 *******
2026-02-02 02:45:56.954332 | orchestrator | ok: [testbed-manager]
2026-02-02 02:45:56.954346 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:45:56.954360 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:45:56.954375 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:45:56.954389 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:45:56.954403 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:45:56.954418 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:45:56.954432 | orchestrator |
2026-02-02 02:45:56.954447 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-02-02 02:45:56.954462 | orchestrator | Monday 02 February 2026 02:45:34 +0000 (0:00:00.558) 0:07:02.624 *******
2026-02-02 02:45:56.954477 | orchestrator | ok: [testbed-manager]
2026-02-02 02:45:56.954491 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:45:56.954518 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:45:56.954533 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:45:56.954547 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:45:56.954562 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:45:56.954577 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:45:56.954592 | orchestrator |
2026-02-02 02:45:56.954626 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-02-02 02:45:56.954641 | orchestrator | Monday 02 February 2026 02:45:39 +0000 (0:00:05.365) 0:07:07.990 *******
2026-02-02 02:45:56.954678 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:45:56.954693 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:45:56.954707 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:45:56.954721 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:45:56.954734 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:45:56.954748 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:45:56.954761 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:45:56.954775 | orchestrator |
2026-02-02 02:45:56.954788 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-02-02 02:45:56.954802 | orchestrator | Monday 02 February 2026 02:45:40 +0000 (0:00:00.540) 0:07:08.530 *******
2026-02-02 02:45:56.954817 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 02:45:56.954833 | orchestrator |
2026-02-02 02:45:56.954847 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-02-02 02:45:56.954861 | orchestrator | Monday 02 February 2026 02:45:41 +0000 (0:00:01.095) 0:07:09.626 *******
2026-02-02 02:45:56.954875 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:45:56.954888 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:45:56.954902 | orchestrator | ok: [testbed-manager]
2026-02-02 02:45:56.954916 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:45:56.954929 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:45:56.954943 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:45:56.954956 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:45:56.954969 | orchestrator |
2026-02-02 02:45:56.954982 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-02-02 02:45:56.954996 | orchestrator | Monday 02 February 2026 02:45:43 +0000 (0:00:01.866) 0:07:11.492 *******
2026-02-02 02:45:56.955010 | orchestrator | ok: [testbed-manager]
2026-02-02 02:45:56.955023 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:45:56.955037 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:45:56.955050 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:45:56.955064 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:45:56.955077 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:45:56.955091 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:45:56.955104 | orchestrator |
2026-02-02 02:45:56.955118 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-02-02 02:45:56.955131 | orchestrator | Monday 02 February 2026 02:45:44 +0000 (0:00:01.080) 0:07:12.573 *******
2026-02-02 02:45:56.955145 | orchestrator | ok: [testbed-manager]
2026-02-02 02:45:56.955158 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:45:56.955172 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:45:56.955186 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:45:56.955199 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:45:56.955212 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:45:56.955226 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:45:56.955239 | orchestrator |
2026-02-02 02:45:56.955253 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-02-02 02:45:56.955267 | orchestrator | Monday 02 February 2026 02:45:44 +0000 (0:00:00.865) 0:07:13.438 *******
2026-02-02 02:45:56.955281 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-02 02:45:56.955296 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-02 02:45:56.955318 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-02 02:45:56.955332 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-02 02:45:56.955352 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-02 02:45:56.955367 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-02 02:45:56.955380 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-02 02:45:56.955394 | orchestrator |
2026-02-02 02:45:56.955407 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-02-02 02:45:56.955420 | orchestrator | Monday 02 February 2026 02:45:46 +0000 (0:00:01.886) 0:07:15.325 *******
2026-02-02 02:45:56.955434 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 02:45:56.955448 | orchestrator |
2026-02-02 02:45:56.955461 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-02-02 02:45:56.955474 | orchestrator | Monday 02 February 2026 02:45:47 +0000 (0:00:00.914) 0:07:16.240 *******
2026-02-02 02:45:56.955487 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:45:56.955500 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:45:56.955512 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:45:56.955524 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:45:56.955537 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:45:56.955551 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:45:56.955565 | orchestrator | changed: [testbed-manager]
2026-02-02 02:45:56.955579 | orchestrator |
2026-02-02 02:45:56.955601 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-02-02 02:46:27.915198 | orchestrator | Monday 02 February 2026 02:45:56 +0000 (0:00:09.170) 0:07:25.410 *******
2026-02-02 02:46:27.915312 | orchestrator | ok: [testbed-manager]
2026-02-02 02:46:27.915329 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:46:27.915342 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:46:27.915352 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:46:27.915363 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:46:27.915373 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:46:27.915383 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:46:27.915395 | orchestrator |
2026-02-02 02:46:27.915402 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-02-02 02:46:27.915409 | orchestrator | Monday 02 February 2026 02:45:58 +0000 (0:00:02.014) 0:07:27.425 *******
2026-02-02 02:46:27.915416 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:46:27.915422 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:46:27.915428 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:46:27.915435 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:46:27.915441 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:46:27.915447 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:46:27.915454 | orchestrator |
2026-02-02 02:46:27.915460 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-02-02 02:46:27.915466 | orchestrator | Monday 02 February 2026 02:46:00 +0000 (0:00:01.276) 0:07:28.702 *******
2026-02-02 02:46:27.915473 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:46:27.915480 | orchestrator | changed: [testbed-manager]
2026-02-02 02:46:27.915486 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:46:27.915493 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:46:27.915499 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:46:27.915526 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:46:27.915533 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:46:27.915539 | orchestrator |
2026-02-02 02:46:27.915546 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-02-02 02:46:27.915554 | orchestrator |
2026-02-02 02:46:27.915565 | orchestrator | TASK [Include hardening role] **************************************************
2026-02-02 02:46:27.915575 | orchestrator | Monday 02 February 2026 02:46:01 +0000 (0:00:01.263) 0:07:29.965 *******
2026-02-02 02:46:27.915586 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:46:27.915596 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:46:27.915606 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:46:27.915616 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:46:27.915626 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:46:27.915637 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:46:27.915647 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:46:27.915657 | orchestrator |
2026-02-02 02:46:27.915687 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-02-02 02:46:27.915696 | orchestrator |
2026-02-02 02:46:27.915707 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-02-02 02:46:27.915717 | orchestrator | Monday 02 February 2026 02:46:02 +0000 (0:00:00.807) 0:07:30.773 *******
2026-02-02 02:46:27.915729 | orchestrator | changed: [testbed-manager]
2026-02-02 02:46:27.915739 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:46:27.915749 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:46:27.915759 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:46:27.915769 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:46:27.915779 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:46:27.915790 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:46:27.915799 | orchestrator |
2026-02-02 02:46:27.915809 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-02-02 02:46:27.915819 | orchestrator | Monday 02 February 2026 02:46:03 +0000 (0:00:01.247) 0:07:32.021 *******
2026-02-02 02:46:27.915829 | orchestrator | ok: [testbed-manager]
2026-02-02 02:46:27.915840 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:46:27.915851 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:46:27.915862 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:46:27.915873 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:46:27.915884 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:46:27.915895 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:46:27.915905 | orchestrator |
2026-02-02 02:46:27.915917 | orchestrator | TASK [Include auditd role] *****************************************************
2026-02-02 02:46:27.915928 | orchestrator | Monday 02 February 2026 02:46:05 +0000 (0:00:01.467) 0:07:33.488 *******
2026-02-02 02:46:27.915939 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:46:27.915950 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:46:27.915958 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:46:27.915965 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:46:27.915971 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:46:27.915991 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:46:27.915997 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:46:27.916003 | orchestrator |
2026-02-02 02:46:27.916009 | orchestrator | TASK [Include smartd role] *****************************************************
2026-02-02 02:46:27.916016 | orchestrator | Monday 02 February 2026 02:46:05 +0000 (0:00:00.546) 0:07:34.034 *******
2026-02-02 02:46:27.916023 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 02:46:27.916032 | orchestrator |
2026-02-02 02:46:27.916038 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-02-02 02:46:27.916044 | orchestrator | Monday 02 February 2026 02:46:06 +0000 (0:00:01.035) 0:07:35.070 *******
2026-02-02 02:46:27.916052 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 02:46:27.916071 | orchestrator |
2026-02-02 02:46:27.916077 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-02-02 02:46:27.916083 | orchestrator | Monday 02 February 2026 02:46:07 +0000 (0:00:00.882) 0:07:35.953 *******
2026-02-02 02:46:27.916090 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:46:27.916096 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:46:27.916102 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:46:27.916108 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:46:27.916114 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:46:27.916120 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:46:27.916126 | orchestrator | changed: [testbed-manager]
2026-02-02 02:46:27.916133 | orchestrator |
2026-02-02 02:46:27.916155 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-02-02 02:46:27.916162 | orchestrator | Monday 02 February 2026 02:46:16 +0000 (0:00:08.849) 0:07:44.803 *******
2026-02-02 02:46:27.916168 | orchestrator | changed: [testbed-manager]
2026-02-02 02:46:27.916174 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:46:27.916180 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:46:27.916187 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:46:27.916193 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:46:27.916198 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:46:27.916205 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:46:27.916211 | orchestrator |
2026-02-02 02:46:27.916217 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-02-02 02:46:27.916223 | orchestrator | Monday 02 February 2026 02:46:17 +0000 (0:00:00.785) 0:07:45.588 *******
2026-02-02 02:46:27.916229 | orchestrator | changed: [testbed-manager]
2026-02-02 02:46:27.916235 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:46:27.916241 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:46:27.916247 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:46:27.916253 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:46:27.916259 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:46:27.916265 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:46:27.916271 | orchestrator |
2026-02-02 02:46:27.916277 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-02-02 02:46:27.916283 | orchestrator | Monday 02 February 2026 02:46:18 +0000 (0:00:01.353) 0:07:46.942 *******
2026-02-02 02:46:27.916290 | orchestrator | changed: [testbed-manager]
2026-02-02 02:46:27.916296 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:46:27.916302 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:46:27.916308 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:46:27.916314 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:46:27.916320 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:46:27.916326 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:46:27.916332 | orchestrator |
2026-02-02 02:46:27.916338 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-02-02 02:46:27.916344 | orchestrator | Monday 02 February 2026 02:46:20 +0000 (0:00:01.927) 0:07:48.870 *******
2026-02-02 02:46:27.916350 | orchestrator | changed: [testbed-manager]
2026-02-02 02:46:27.916356 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:46:27.916362 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:46:27.916368 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:46:27.916374 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:46:27.916380 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:46:27.916386 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:46:27.916392 | orchestrator |
2026-02-02 02:46:27.916399 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-02-02 02:46:27.916405 | orchestrator | Monday 02 February 2026 02:46:21 +0000 (0:00:01.248) 0:07:50.119 *******
2026-02-02 02:46:27.916411 | orchestrator | changed: [testbed-manager]
2026-02-02 02:46:27.916417 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:46:27.916428 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:46:27.916434 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:46:27.916440 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:46:27.916446 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:46:27.916452 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:46:27.916458 | orchestrator |
2026-02-02 02:46:27.916464 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-02-02 02:46:27.916471 | orchestrator |
2026-02-02 02:46:27.916477 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-02-02 02:46:27.916483 | orchestrator | Monday 02 February 2026 02:46:22 +0000 (0:00:01.108) 0:07:51.228 *******
2026-02-02 02:46:27.916489 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 02:46:27.916496 | orchestrator |
2026-02-02 02:46:27.916502 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-02-02 02:46:27.916508 | orchestrator | Monday 02 February 2026 02:46:23 +0000 (0:00:00.864) 0:07:52.092 *******
2026-02-02 02:46:27.916514 | orchestrator | ok: [testbed-manager]
2026-02-02 02:46:27.916520 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:46:27.916526 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:46:27.916532 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:46:27.916538 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:46:27.916544 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:46:27.916554 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:46:27.916560 | orchestrator |
2026-02-02 02:46:27.916567 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-02-02 02:46:27.916573 | orchestrator | Monday 02 February 2026 02:46:24 +0000 (0:00:01.165) 0:07:53.258 *******
2026-02-02 02:46:27.916602 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:46:27.916608 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:46:27.916614 | orchestrator | changed: [testbed-manager]
2026-02-02 02:46:27.916620 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:46:27.916627 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:46:27.916633 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:46:27.916639 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:46:27.916645 | orchestrator |
2026-02-02 02:46:27.916651 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-02-02 02:46:27.916657 | orchestrator | Monday 02 February 2026 02:46:26 +0000 (0:00:01.245) 0:07:54.504 *******
2026-02-02 02:46:27.916695 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 02:46:27.916702 | orchestrator |
2026-02-02 02:46:27.916709 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-02-02 02:46:27.916715 | orchestrator | Monday 02 February 2026 02:46:27 +0000 (0:00:01.028) 0:07:55.533 *******
2026-02-02 02:46:27.916722 | orchestrator | ok: [testbed-manager]
2026-02-02 02:46:27.916728 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:46:27.916734 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:46:27.916740 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:46:27.916746 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:46:27.916753 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:46:27.916759 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:46:27.916765 | orchestrator |
2026-02-02 02:46:27.916778 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-02-02 02:46:29.576314 | orchestrator | Monday 02 February 2026 02:46:27 +0000 (0:00:00.841) 0:07:56.374 *******
2026-02-02 02:46:29.576410 | orchestrator | changed: [testbed-manager]
2026-02-02 02:46:29.576426 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:46:29.576438 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:46:29.576449 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:46:29.576459 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:46:29.576470 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:46:29.576481 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:46:29.576517 | orchestrator |
2026-02-02 02:46:29.576529 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 02:46:29.576541 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-02-02 02:46:29.576554 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-02 02:46:29.576564 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-02 02:46:29.576575 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-02 02:46:29.576587 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-02-02 02:46:29.576605 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-02-02 02:46:29.576622 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-02-02 02:46:29.576641 | orchestrator |
2026-02-02 02:46:29.576659 | orchestrator |
2026-02-02 02:46:29.576773 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 02:46:29.576790 | orchestrator | Monday 02 February 2026 02:46:29 +0000 (0:00:01.116) 0:07:57.490 *******
2026-02-02 02:46:29.576801 | orchestrator | ===============================================================================
2026-02-02 02:46:29.576812 | orchestrator | osism.commons.packages : Install required packages --------------------- 74.28s
2026-02-02 02:46:29.576822 | orchestrator | osism.commons.packages : Download required packages -------------------- 38.66s
2026-02-02 02:46:29.576833 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.00s
2026-02-02 02:46:29.576844 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.07s
2026-02-02 02:46:29.576854 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.31s
2026-02-02 02:46:29.576868 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.15s
2026-02-02 02:46:29.576881 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.96s
2026-02-02 02:46:29.576900 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.24s
2026-02-02 02:46:29.576919 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.17s
2026-02-02 02:46:29.576939 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.85s
2026-02-02 02:46:29.576959 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.75s
2026-02-02 02:46:29.576979 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.74s
2026-02-02 02:46:29.576998 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.57s
2026-02-02 02:46:29.577030 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.55s
2026-02-02 02:46:29.577044 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.22s
2026-02-02 02:46:29.577057 | orchestrator | osism.services.docker : Add repository ---------------------------------- 6.62s
2026-02-02 02:46:29.577070 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.22s
2026-02-02 02:46:29.577082 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.94s
2026-02-02 02:46:29.577098 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.61s
2026-02-02 02:46:29.577117 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.37s
2026-02-02 02:46:29.820308 | orchestrator | + osism apply fail2ban
2026-02-02 02:46:42.566369 | orchestrator | 2026-02-02 02:46:42 | INFO  | Task d8fd4b9f-b0c7-466a-a109-b0c0653f3044 (fail2ban) was prepared for execution.
2026-02-02 02:46:42.566466 | orchestrator | 2026-02-02 02:46:42 | INFO  | It takes a moment until task d8fd4b9f-b0c7-466a-a109-b0c0653f3044 (fail2ban) has been started and output is visible here.
2026-02-02 02:47:05.279002 | orchestrator |
2026-02-02 02:47:05.279097 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-02-02 02:47:05.279109 | orchestrator |
2026-02-02 02:47:05.279116 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-02-02 02:47:05.279124 | orchestrator | Monday 02 February 2026 02:46:47 +0000 (0:00:00.280) 0:00:00.280 *******
2026-02-02 02:47:05.279133 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 02:47:05.279146 | orchestrator |
2026-02-02 02:47:05.279160 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-02-02 02:47:05.279176 | orchestrator | Monday 02 February 2026 02:46:48 +0000 (0:00:01.184) 0:00:01.465 *******
2026-02-02 02:47:05.279189 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:47:05.279202 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:47:05.279214 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:47:05.279226 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:47:05.279237 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:47:05.279248 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:47:05.279259 | orchestrator | changed: [testbed-manager]
2026-02-02 02:47:05.279271 | orchestrator |
2026-02-02 02:47:05.279284 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-02-02 02:47:05.279297 | orchestrator | Monday 02 February 2026 02:47:00 +0000 (0:00:11.519) 0:00:12.984 *******
2026-02-02 02:47:05.279309 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:47:05.279323 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:47:05.279335 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:47:05.279347 | orchestrator | changed: [testbed-manager]
2026-02-02 02:47:05.279355 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:47:05.279361 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:47:05.279368 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:47:05.279378 | orchestrator |
2026-02-02 02:47:05.279389 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-02-02 02:47:05.279400 | orchestrator | Monday 02 February 2026 02:47:01 +0000 (0:00:01.457) 0:00:14.442 *******
2026-02-02 02:47:05.279411 | orchestrator | ok: [testbed-manager]
2026-02-02 02:47:05.279423 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:47:05.279434 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:47:05.279444 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:47:05.279455 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:47:05.279466 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:47:05.279478 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:47:05.279489 | orchestrator |
2026-02-02 02:47:05.279500 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-02-02 02:47:05.279511 | orchestrator | Monday 02 February 2026 02:47:03 +0000 (0:00:01.457) 0:00:15.900 *******
2026-02-02 02:47:05.279523 | orchestrator | changed: [testbed-manager]
2026-02-02 02:47:05.279535 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:47:05.279547 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:47:05.279559 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:47:05.279572 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:47:05.279580 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:47:05.279589 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:47:05.279597 | orchestrator |
2026-02-02 02:47:05.279606 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 02:47:05.279614 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 02:47:05.279647 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 02:47:05.279656 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 02:47:05.279665 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 02:47:05.279699 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 02:47:05.279706 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 02:47:05.279713 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 02:47:05.279720 | orchestrator |
2026-02-02 02:47:05.279727 | orchestrator |
2026-02-02 02:47:05.279734 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 02:47:05.279740 | orchestrator | Monday 02 February 2026 02:47:04 +0000 (0:00:01.605) 0:00:17.505 *******
2026-02-02 02:47:05.279747 | orchestrator | ===============================================================================
2026-02-02 02:47:05.279754 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.52s
2026-02-02 02:47:05.279761 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.61s
2026-02-02 02:47:05.279767 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.46s
2026-02-02 02:47:05.279774 | orchestrator | osism.services.fail2ban : 
Manage fail2ban service ----------------------- 1.46s 2026-02-02 02:47:05.279781 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.18s 2026-02-02 02:47:05.657569 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-02-02 02:47:05.657768 | orchestrator | + osism apply network 2026-02-02 02:47:17.732458 | orchestrator | 2026-02-02 02:47:17 | INFO  | Task 6b4d6df8-7655-4bb8-bf52-aba914d1c070 (network) was prepared for execution. 2026-02-02 02:47:17.732570 | orchestrator | 2026-02-02 02:47:17 | INFO  | It takes a moment until task 6b4d6df8-7655-4bb8-bf52-aba914d1c070 (network) has been started and output is visible here. 2026-02-02 02:47:47.456632 | orchestrator | 2026-02-02 02:47:47.456792 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-02-02 02:47:47.456806 | orchestrator | 2026-02-02 02:47:47.456815 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-02-02 02:47:47.456823 | orchestrator | Monday 02 February 2026 02:47:22 +0000 (0:00:00.283) 0:00:00.283 ******* 2026-02-02 02:47:47.456831 | orchestrator | ok: [testbed-manager] 2026-02-02 02:47:47.456840 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:47:47.456847 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:47:47.456854 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:47:47.456862 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:47:47.456869 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:47:47.456876 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:47:47.456883 | orchestrator | 2026-02-02 02:47:47.456891 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-02-02 02:47:47.456898 | orchestrator | Monday 02 February 2026 02:47:23 +0000 (0:00:00.759) 0:00:01.042 ******* 2026-02-02 02:47:47.456906 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 02:47:47.456916 | orchestrator | 2026-02-02 02:47:47.456923 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-02-02 02:47:47.456958 | orchestrator | Monday 02 February 2026 02:47:24 +0000 (0:00:01.265) 0:00:02.308 ******* 2026-02-02 02:47:47.456973 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:47:47.456986 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:47:47.456999 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:47:47.457007 | orchestrator | ok: [testbed-manager] 2026-02-02 02:47:47.457014 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:47:47.457021 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:47:47.457028 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:47:47.457039 | orchestrator | 2026-02-02 02:47:47.457050 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-02-02 02:47:47.457062 | orchestrator | Monday 02 February 2026 02:47:26 +0000 (0:00:01.950) 0:00:04.258 ******* 2026-02-02 02:47:47.457074 | orchestrator | ok: [testbed-manager] 2026-02-02 02:47:47.457085 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:47:47.457098 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:47:47.457110 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:47:47.457141 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:47:47.457155 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:47:47.457165 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:47:47.457172 | orchestrator | 2026-02-02 02:47:47.457179 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-02-02 02:47:47.457188 | orchestrator | Monday 02 February 2026 02:47:27 +0000 (0:00:01.712) 0:00:05.971 ******* 
2026-02-02 02:47:47.457197 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-02-02 02:47:47.457207 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-02-02 02:47:47.457216 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-02-02 02:47:47.457225 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-02-02 02:47:47.457237 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-02-02 02:47:47.457252 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-02-02 02:47:47.457269 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-02-02 02:47:47.457281 | orchestrator | 2026-02-02 02:47:47.457314 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-02-02 02:47:47.457327 | orchestrator | Monday 02 February 2026 02:47:28 +0000 (0:00:00.942) 0:00:06.913 ******* 2026-02-02 02:47:47.457338 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-02 02:47:47.457352 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-02 02:47:47.457364 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-02 02:47:47.457376 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-02 02:47:47.457389 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-02 02:47:47.457401 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-02 02:47:47.457414 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-02 02:47:47.457427 | orchestrator | 2026-02-02 02:47:47.457438 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-02-02 02:47:47.457446 | orchestrator | Monday 02 February 2026 02:47:32 +0000 (0:00:03.670) 0:00:10.584 ******* 2026-02-02 02:47:47.457455 | orchestrator | changed: [testbed-manager] 2026-02-02 02:47:47.457463 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:47:47.457472 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:47:47.457480 | orchestrator | changed: 
[testbed-node-2] 2026-02-02 02:47:47.457492 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:47:47.457500 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:47:47.457508 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:47:47.457516 | orchestrator | 2026-02-02 02:47:47.457525 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-02-02 02:47:47.457533 | orchestrator | Monday 02 February 2026 02:47:34 +0000 (0:00:01.598) 0:00:12.183 ******* 2026-02-02 02:47:47.457542 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-02 02:47:47.457550 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-02 02:47:47.457558 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-02 02:47:47.457567 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-02 02:47:47.457584 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-02 02:47:47.457591 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-02 02:47:47.457598 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-02 02:47:47.457605 | orchestrator | 2026-02-02 02:47:47.457612 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-02-02 02:47:47.457620 | orchestrator | Monday 02 February 2026 02:47:36 +0000 (0:00:02.245) 0:00:14.429 ******* 2026-02-02 02:47:47.457627 | orchestrator | ok: [testbed-manager] 2026-02-02 02:47:47.457634 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:47:47.457641 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:47:47.457648 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:47:47.457656 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:47:47.457663 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:47:47.457670 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:47:47.457702 | orchestrator | 2026-02-02 02:47:47.457710 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-02-02 02:47:47.457735 | 
orchestrator | Monday 02 February 2026 02:47:37 +0000 (0:00:01.127) 0:00:15.557 ******* 2026-02-02 02:47:47.457743 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:47:47.457750 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:47:47.457758 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:47:47.457765 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:47:47.457772 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:47:47.457779 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:47:47.457786 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:47:47.457793 | orchestrator | 2026-02-02 02:47:47.457800 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-02-02 02:47:47.457807 | orchestrator | Monday 02 February 2026 02:47:38 +0000 (0:00:00.696) 0:00:16.254 ******* 2026-02-02 02:47:47.457825 | orchestrator | ok: [testbed-manager] 2026-02-02 02:47:47.457849 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:47:47.457856 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:47:47.457871 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:47:47.457879 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:47:47.457886 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:47:47.457893 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:47:47.457900 | orchestrator | 2026-02-02 02:47:47.457907 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-02-02 02:47:47.457915 | orchestrator | Monday 02 February 2026 02:47:40 +0000 (0:00:02.122) 0:00:18.377 ******* 2026-02-02 02:47:47.457922 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:47:47.457929 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:47:47.457936 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:47:47.457944 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:47:47.457951 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:47:47.457958 | 
orchestrator | skipping: [testbed-node-5] 2026-02-02 02:47:47.457966 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-02-02 02:47:47.457975 | orchestrator | 2026-02-02 02:47:47.457982 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-02-02 02:47:47.457990 | orchestrator | Monday 02 February 2026 02:47:41 +0000 (0:00:00.954) 0:00:19.331 ******* 2026-02-02 02:47:47.457997 | orchestrator | ok: [testbed-manager] 2026-02-02 02:47:47.458004 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:47:47.458011 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:47:47.458069 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:47:47.458076 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:47:47.458083 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:47:47.458091 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:47:47.458098 | orchestrator | 2026-02-02 02:47:47.458105 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-02-02 02:47:47.458112 | orchestrator | Monday 02 February 2026 02:47:42 +0000 (0:00:01.621) 0:00:20.953 ******* 2026-02-02 02:47:47.458120 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 02:47:47.458137 | orchestrator | 2026-02-02 02:47:47.458145 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-02-02 02:47:47.458152 | orchestrator | Monday 02 February 2026 02:47:44 +0000 (0:00:01.330) 0:00:22.283 ******* 2026-02-02 02:47:47.458159 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:47:47.458166 | orchestrator | ok: [testbed-manager] 2026-02-02 02:47:47.458173 | orchestrator 
| ok: [testbed-node-1] 2026-02-02 02:47:47.458181 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:47:47.458188 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:47:47.458195 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:47:47.458202 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:47:47.458209 | orchestrator | 2026-02-02 02:47:47.458216 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-02-02 02:47:47.458224 | orchestrator | Monday 02 February 2026 02:47:45 +0000 (0:00:01.114) 0:00:23.398 ******* 2026-02-02 02:47:47.458231 | orchestrator | ok: [testbed-manager] 2026-02-02 02:47:47.458238 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:47:47.458245 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:47:47.458252 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:47:47.458259 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:47:47.458266 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:47:47.458273 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:47:47.458280 | orchestrator | 2026-02-02 02:47:47.458288 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-02-02 02:47:47.458295 | orchestrator | Monday 02 February 2026 02:47:46 +0000 (0:00:00.641) 0:00:24.040 ******* 2026-02-02 02:47:47.458306 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-02-02 02:47:47.458314 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-02-02 02:47:47.458321 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-02-02 02:47:47.458328 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-02-02 02:47:47.458336 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-02 02:47:47.458343 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-02 02:47:47.458350 
| orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-02-02 02:47:47.458357 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-02 02:47:47.458364 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-02-02 02:47:47.458371 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-02 02:47:47.458378 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-02 02:47:47.458386 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-02 02:47:47.458393 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-02-02 02:47:47.458403 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-02 02:47:47.458416 | orchestrator | 2026-02-02 02:47:47.458437 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-02-02 02:48:05.369013 | orchestrator | Monday 02 February 2026 02:47:47 +0000 (0:00:01.378) 0:00:25.419 ******* 2026-02-02 02:48:05.369099 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:48:05.369120 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:48:05.369136 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:48:05.369151 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:48:05.369167 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:48:05.369182 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:48:05.369198 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:48:05.369214 | orchestrator | 2026-02-02 02:48:05.369247 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-02-02 02:48:05.369256 | orchestrator | Monday 02 February 2026 02:47:48 +0000 (0:00:00.667) 0:00:26.086 ******* 2026-02-02 02:48:05.369265 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-3, testbed-node-2, testbed-node-4, testbed-node-5 2026-02-02 02:48:05.369274 | orchestrator | 2026-02-02 02:48:05.369282 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-02-02 02:48:05.369290 | orchestrator | Monday 02 February 2026 02:47:52 +0000 (0:00:04.830) 0:00:30.916 ******* 2026-02-02 02:48:05.369300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-02-02 02:48:05.369308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-02-02 02:48:05.369317 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-02-02 02:48:05.369333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-02-02 02:48:05.369342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 
42}}) 2026-02-02 02:48:05.369350 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-02-02 02:48:05.369358 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-02-02 02:48:05.369372 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-02-02 02:48:05.369380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-02-02 02:48:05.369388 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-02-02 02:48:05.369396 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-02-02 02:48:05.369417 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-02-02 02:48:05.369432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-02-02 02:48:05.369440 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-02-02 02:48:05.369448 | orchestrator | 2026-02-02 02:48:05.369457 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-02-02 02:48:05.369465 | orchestrator | Monday 02 February 2026 02:47:58 +0000 (0:00:05.966) 0:00:36.883 ******* 2026-02-02 02:48:05.369473 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-02-02 02:48:05.369481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-02-02 02:48:05.369489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-02-02 02:48:05.369497 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 
'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-02-02 02:48:05.369505 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-02-02 02:48:05.369513 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-02-02 02:48:05.369521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-02-02 02:48:05.369530 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-02-02 02:48:05.369546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-02-02 02:48:05.369561 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 
1350, 'vni': 23}}) 2026-02-02 02:48:05.369576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-02-02 02:48:05.369600 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-02-02 02:48:05.369625 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-02-02 02:48:10.946159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-02-02 02:48:10.946260 | orchestrator | 2026-02-02 02:48:10.946278 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-02-02 02:48:10.946291 | orchestrator | Monday 02 February 2026 02:48:05 +0000 (0:00:06.445) 0:00:43.328 ******* 2026-02-02 02:48:10.946303 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 02:48:10.946315 | orchestrator | 2026-02-02 02:48:10.946326 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
2026-02-02 02:48:10.946337 | orchestrator | Monday 02 February 2026 02:48:06 +0000 (0:00:01.191) 0:00:44.520 *******
2026-02-02 02:48:10.946348 | orchestrator | ok: [testbed-manager]
2026-02-02 02:48:10.946360 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:48:10.946371 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:48:10.946381 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:48:10.946392 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:48:10.946403 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:48:10.946413 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:48:10.946424 | orchestrator |
2026-02-02 02:48:10.946435 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-02 02:48:10.946446 | orchestrator | Monday 02 February 2026 02:48:07 +0000 (0:00:01.083) 0:00:45.604 *******
2026-02-02 02:48:10.946457 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-02 02:48:10.946468 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-02 02:48:10.946479 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-02 02:48:10.946490 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-02 02:48:10.946501 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-02 02:48:10.946521 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-02 02:48:10.946536 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-02 02:48:10.946547 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-02 02:48:10.946558 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:48:10.946569 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-02 02:48:10.946580 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-02 02:48:10.946591 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-02 02:48:10.946602 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-02 02:48:10.946613 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:48:10.946645 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-02 02:48:10.946657 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-02 02:48:10.946667 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-02 02:48:10.946704 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-02 02:48:10.946723 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:48:10.946756 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-02 02:48:10.946775 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-02 02:48:10.946790 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-02 02:48:10.946801 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-02 02:48:10.946812 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:48:10.946822 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-02 02:48:10.946833 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-02 02:48:10.946844 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-02 02:48:10.946855 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-02 02:48:10.946865 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:48:10.946876 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:48:10.946887 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-02 02:48:10.946898 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-02 02:48:10.946908 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-02 02:48:10.946919 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-02 02:48:10.946929 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:48:10.946940 | orchestrator |
2026-02-02 02:48:10.946951 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-02-02 02:48:10.946978 | orchestrator | Monday 02 February 2026 02:48:09 +0000 (0:00:01.789) 0:00:47.394 *******
2026-02-02 02:48:10.946990 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:48:10.947000 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:48:10.947011 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:48:10.947022 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:48:10.947032 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:48:10.947043 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:48:10.947053 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:48:10.947064 | orchestrator |
2026-02-02 02:48:10.947074 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-02-02 02:48:10.947085 | orchestrator | Monday 02 February 2026 02:48:10 +0000 (0:00:00.618) 0:00:48.012 *******
2026-02-02 02:48:10.947096 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:48:10.947122 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:48:10.947144 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:48:10.947155 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:48:10.947166 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:48:10.947176 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:48:10.947187 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:48:10.947198 | orchestrator |
2026-02-02 02:48:10.947209 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 02:48:10.947220 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-02 02:48:10.947232 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-02 02:48:10.947252 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-02 02:48:10.947264 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-02 02:48:10.947274 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-02 02:48:10.947285 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-02 02:48:10.947296 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-02 02:48:10.947306 | orchestrator |
2026-02-02 02:48:10.947317 | orchestrator |
2026-02-02 02:48:10.947328 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 02:48:10.947339 | orchestrator | Monday 02 February 2026 02:48:10 +0000 (0:00:00.615) 0:00:48.628 *******
2026-02-02 02:48:10.947350 | orchestrator | ===============================================================================
2026-02-02 02:48:10.947360 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.45s
2026-02-02 02:48:10.947371 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.97s
2026-02-02 02:48:10.947381 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.83s
2026-02-02 02:48:10.947392 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.67s
2026-02-02 02:48:10.947403 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 2.25s
2026-02-02 02:48:10.947417 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.12s
2026-02-02 02:48:10.947436 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.95s
2026-02-02 02:48:10.947468 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.79s
2026-02-02 02:48:10.947495 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.71s
2026-02-02 02:48:10.947515 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.62s
2026-02-02 02:48:10.947534 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.60s
2026-02-02 02:48:10.947554 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.38s
2026-02-02 02:48:10.947575 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.33s
2026-02-02 02:48:10.947595 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.27s
2026-02-02 02:48:10.947616 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.19s
2026-02-02 02:48:10.947638 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.13s
2026-02-02 02:48:10.947659 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.11s
2026-02-02 02:48:10.947729 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.08s
2026-02-02 02:48:10.947748 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.95s
2026-02-02 02:48:10.947767 | orchestrator | osism.commons.network : Create required directories --------------------- 0.94s
2026-02-02 02:48:11.188042 | orchestrator | + osism apply wireguard
2026-02-02 02:48:23.170107 | orchestrator | 2026-02-02 02:48:23 | INFO  | Task b247613b-b587-4a66-8295-6dbedaa22d51 (wireguard) was prepared for execution.
2026-02-02 02:48:23.170232 | orchestrator | 2026-02-02 02:48:23 | INFO  | It takes a moment until task b247613b-b587-4a66-8295-6dbedaa22d51 (wireguard) has been started and output is visible here.
2026-02-02 02:48:43.032889 | orchestrator |
2026-02-02 02:48:43.033010 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-02-02 02:48:43.033027 | orchestrator |
2026-02-02 02:48:43.033039 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-02-02 02:48:43.033051 | orchestrator | Monday 02 February 2026 02:48:27 +0000 (0:00:00.274) 0:00:00.274 *******
2026-02-02 02:48:43.033062 | orchestrator | ok: [testbed-manager]
2026-02-02 02:48:43.033074 | orchestrator |
2026-02-02 02:48:43.033085 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-02-02 02:48:43.033096 | orchestrator | Monday 02 February 2026 02:48:29 +0000 (0:00:01.705) 0:00:01.979 *******
2026-02-02 02:48:43.033106 | orchestrator | changed: [testbed-manager]
2026-02-02 02:48:43.033121 | orchestrator |
2026-02-02 02:48:43.033133 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-02-02 02:48:43.033144 | orchestrator | Monday 02 February 2026 02:48:36 +0000 (0:00:06.571) 0:00:08.551 *******
2026-02-02 02:48:43.033155 | orchestrator | changed: [testbed-manager]
2026-02-02 02:48:43.033166 | orchestrator |
2026-02-02 02:48:43.033176 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-02-02 02:48:43.033187 | orchestrator | Monday 02 February 2026 02:48:36 +0000 (0:00:00.525) 0:00:09.076 *******
2026-02-02 02:48:43.033198 | orchestrator | changed: [testbed-manager]
2026-02-02 02:48:43.033209 | orchestrator |
2026-02-02 02:48:43.033220 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-02-02 02:48:43.033231 | orchestrator | Monday 02 February 2026 02:48:36 +0000 (0:00:00.399) 0:00:09.476 *******
2026-02-02 02:48:43.033247 | orchestrator | ok: [testbed-manager]
2026-02-02 02:48:43.033266 | orchestrator |
2026-02-02 02:48:43.033285 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-02-02 02:48:43.033303 | orchestrator | Monday 02 February 2026 02:48:37 +0000 (0:00:00.581) 0:00:10.057 *******
2026-02-02 02:48:43.033321 | orchestrator | ok: [testbed-manager]
2026-02-02 02:48:43.033341 | orchestrator |
2026-02-02 02:48:43.033361 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-02-02 02:48:43.033380 | orchestrator | Monday 02 February 2026 02:48:37 +0000 (0:00:00.399) 0:00:10.457 *******
2026-02-02 02:48:43.033398 | orchestrator | ok: [testbed-manager]
2026-02-02 02:48:43.033409 | orchestrator |
2026-02-02 02:48:43.033420 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-02-02 02:48:43.033431 | orchestrator | Monday 02 February 2026 02:48:38 +0000 (0:00:00.403) 0:00:10.861 *******
2026-02-02 02:48:43.033441 | orchestrator | changed: [testbed-manager]
2026-02-02 02:48:43.033452 | orchestrator |
2026-02-02 02:48:43.033463 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-02-02 02:48:43.033474 | orchestrator | Monday 02 February 2026 02:48:39 +0000 (0:00:01.063) 0:00:11.924 *******
2026-02-02 02:48:43.033484 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-02 02:48:43.033495 | orchestrator | changed: [testbed-manager]
2026-02-02 02:48:43.033506 | orchestrator |
2026-02-02 02:48:43.033517 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-02-02 02:48:43.033527 | orchestrator | Monday 02 February 2026 02:48:40 +0000 (0:00:00.850) 0:00:12.774 *******
2026-02-02 02:48:43.033538 | orchestrator | changed: [testbed-manager]
2026-02-02 02:48:43.033549 | orchestrator |
2026-02-02 02:48:43.033560 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-02-02 02:48:43.033571 | orchestrator | Monday 02 February 2026 02:48:41 +0000 (0:00:01.600) 0:00:14.374 *******
2026-02-02 02:48:43.033582 | orchestrator | changed: [testbed-manager]
2026-02-02 02:48:43.033592 | orchestrator |
2026-02-02 02:48:43.033603 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 02:48:43.033614 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 02:48:43.033626 | orchestrator |
2026-02-02 02:48:43.033636 | orchestrator |
2026-02-02 02:48:43.033647 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 02:48:43.033668 | orchestrator | Monday 02 February 2026 02:48:42 +0000 (0:00:00.897) 0:00:15.272 *******
2026-02-02 02:48:43.033679 | orchestrator | ===============================================================================
2026-02-02 02:48:43.033733 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.57s
2026-02-02 02:48:43.033744 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.71s
2026-02-02 02:48:43.033755 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.60s
2026-02-02 02:48:43.033766 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.06s
2026-02-02 02:48:43.033777 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.90s
2026-02-02 02:48:43.033788 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.85s
2026-02-02 02:48:43.033798 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.58s
2026-02-02 02:48:43.033809 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.53s
2026-02-02 02:48:43.033820 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.40s
2026-02-02 02:48:43.033830 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.40s
2026-02-02 02:48:43.033841 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.40s
2026-02-02 02:48:43.260324 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-02-02 02:48:43.295940 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-02-02 02:48:43.296037 | orchestrator | Dload Upload Total Spent Left Speed
2026-02-02 02:48:43.371451 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 198 0 --:--:-- --:--:-- --:--:-- 200
2026-02-02 02:48:43.385170 | orchestrator | + osism apply --environment custom workarounds
2026-02-02 02:48:45.367221 | orchestrator | 2026-02-02 02:48:45 | INFO  | Trying to run play workarounds in environment custom
2026-02-02 02:48:55.528294 | orchestrator | 2026-02-02 02:48:55 | INFO  | Task e9615b04-a977-4ff3-b7e4-bfd8e4b394dd (workarounds) was prepared for execution.
2026-02-02 02:48:55.528433 | orchestrator | 2026-02-02 02:48:55 | INFO  | It takes a moment until task e9615b04-a977-4ff3-b7e4-bfd8e4b394dd (workarounds) has been started and output is visible here.
2026-02-02 02:49:21.255495 | orchestrator |
2026-02-02 02:49:21.255614 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-02 02:49:21.255630 | orchestrator |
2026-02-02 02:49:21.255642 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-02-02 02:49:21.255654 | orchestrator | Monday 02 February 2026 02:48:59 +0000 (0:00:00.132) 0:00:00.132 *******
2026-02-02 02:49:21.255665 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-02-02 02:49:21.255677 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-02-02 02:49:21.255688 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-02-02 02:49:21.255729 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-02-02 02:49:21.255740 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-02-02 02:49:21.255751 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-02-02 02:49:21.255762 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-02-02 02:49:21.255773 | orchestrator |
2026-02-02 02:49:21.255784 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-02-02 02:49:21.255795 | orchestrator |
2026-02-02 02:49:21.255806 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-02 02:49:21.255816 | orchestrator | Monday 02 February 2026 02:49:00 +0000 (0:00:00.858) 0:00:00.991 *******
2026-02-02 02:49:21.255827 | orchestrator | ok: [testbed-manager]
2026-02-02 02:49:21.255865 | orchestrator |
2026-02-02 02:49:21.255876 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-02-02 02:49:21.255887 | orchestrator |
2026-02-02 02:49:21.255898 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-02 02:49:21.255909 | orchestrator | Monday 02 February 2026 02:49:03 +0000 (0:00:02.646) 0:00:03.637 *******
2026-02-02 02:49:21.255920 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:49:21.255931 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:49:21.255942 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:49:21.255952 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:49:21.255963 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:49:21.255973 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:49:21.255984 | orchestrator |
2026-02-02 02:49:21.255995 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-02-02 02:49:21.256005 | orchestrator |
2026-02-02 02:49:21.256016 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-02-02 02:49:21.256027 | orchestrator | Monday 02 February 2026 02:49:05 +0000 (0:00:01.736) 0:00:05.373 *******
2026-02-02 02:49:21.256038 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-02 02:49:21.256050 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-02 02:49:21.256061 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-02 02:49:21.256072 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-02 02:49:21.256082 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-02 02:49:21.256107 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-02 02:49:21.256118 | orchestrator |
2026-02-02 02:49:21.256129 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-02-02 02:49:21.256140 | orchestrator | Monday 02 February 2026 02:49:06 +0000 (0:00:01.448) 0:00:06.822 *******
2026-02-02 02:49:21.256150 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:49:21.256162 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:49:21.256172 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:49:21.256183 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:49:21.256195 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:49:21.256214 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:49:21.256232 | orchestrator |
2026-02-02 02:49:21.256248 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-02-02 02:49:21.256267 | orchestrator | Monday 02 February 2026 02:49:10 +0000 (0:00:03.514) 0:00:10.336 *******
2026-02-02 02:49:21.256285 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:49:21.256304 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:49:21.256323 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:49:21.256334 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:49:21.256345 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:49:21.256356 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:49:21.256366 | orchestrator |
2026-02-02 02:49:21.256378 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-02-02 02:49:21.256388 | orchestrator |
2026-02-02 02:49:21.256399 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-02-02 02:49:21.256410 | orchestrator | Monday 02 February 2026 02:49:10 +0000 (0:00:00.756) 0:00:11.093 *******
2026-02-02 02:49:21.256421 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:49:21.256432 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:49:21.256442 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:49:21.256453 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:49:21.256464 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:49:21.256474 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:49:21.256494 | orchestrator | changed: [testbed-manager]
2026-02-02 02:49:21.256505 | orchestrator |
2026-02-02 02:49:21.256516 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-02-02 02:49:21.256526 | orchestrator | Monday 02 February 2026 02:49:12 +0000 (0:00:01.573) 0:00:12.666 *******
2026-02-02 02:49:21.256537 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:49:21.256548 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:49:21.256559 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:49:21.256570 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:49:21.256581 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:49:21.256591 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:49:21.256621 | orchestrator | changed: [testbed-manager]
2026-02-02 02:49:21.256633 | orchestrator |
2026-02-02 02:49:21.256644 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-02-02 02:49:21.256655 | orchestrator | Monday 02 February 2026 02:49:13 +0000 (0:00:01.563) 0:00:14.230 *******
2026-02-02 02:49:21.256665 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:49:21.256719 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:49:21.256731 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:49:21.256742 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:49:21.256753 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:49:21.256764 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:49:21.256774 | orchestrator | ok: [testbed-manager]
2026-02-02 02:49:21.256785 | orchestrator |
2026-02-02 02:49:21.256796 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-02-02 02:49:21.256807 | orchestrator | Monday 02 February 2026 02:49:15 +0000 (0:00:01.630) 0:00:15.860 *******
2026-02-02 02:49:21.256818 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:49:21.256828 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:49:21.256839 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:49:21.256850 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:49:21.256861 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:49:21.256871 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:49:21.256882 | orchestrator | changed: [testbed-manager]
2026-02-02 02:49:21.256893 | orchestrator |
2026-02-02 02:49:21.256903 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-02-02 02:49:21.256914 | orchestrator | Monday 02 February 2026 02:49:17 +0000 (0:00:01.888) 0:00:17.750 *******
2026-02-02 02:49:21.256925 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:49:21.256936 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:49:21.256946 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:49:21.256957 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:49:21.256968 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:49:21.256981 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:49:21.256999 | orchestrator | skipping: [testbed-manager]
2026-02-02 02:49:21.257026 | orchestrator |
2026-02-02 02:49:21.257045 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-02-02 02:49:21.257063 | orchestrator |
2026-02-02 02:49:21.257080 | orchestrator | TASK [Install python3-docker] **************************************************
2026-02-02 02:49:21.257098 | orchestrator | Monday 02 February 2026 02:49:18 +0000 (0:00:00.633) 0:00:18.383 *******
2026-02-02 02:49:21.257114 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:49:21.257131 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:49:21.257149 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:49:21.257166 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:49:21.257184 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:49:21.257202 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:49:21.257221 | orchestrator | ok: [testbed-manager]
2026-02-02 02:49:21.257240 | orchestrator |
2026-02-02 02:49:21.257259 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 02:49:21.257279 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-02 02:49:21.257294 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 02:49:21.257316 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 02:49:21.257335 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 02:49:21.257346 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 02:49:21.257357 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 02:49:21.257368 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 02:49:21.257379 | orchestrator |
2026-02-02 02:49:21.257390 | orchestrator |
2026-02-02 02:49:21.257401 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 02:49:21.257412 | orchestrator | Monday 02 February 2026 02:49:21 +0000 (0:00:03.135) 0:00:21.518 *******
2026-02-02 02:49:21.257423 | orchestrator | ===============================================================================
2026-02-02 02:49:21.257434 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.51s
2026-02-02 02:49:21.257445 | orchestrator | Install python3-docker -------------------------------------------------- 3.14s
2026-02-02 02:49:21.257456 | orchestrator | Apply netplan configuration --------------------------------------------- 2.65s
2026-02-02 02:49:21.257467 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.89s
2026-02-02 02:49:21.257478 | orchestrator | Apply netplan configuration --------------------------------------------- 1.74s
2026-02-02 02:49:21.257573 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.63s
2026-02-02 02:49:21.257587 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.57s
2026-02-02 02:49:21.257598 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.56s
2026-02-02 02:49:21.257609 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.45s
2026-02-02 02:49:21.257620 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.86s
2026-02-02 02:49:21.257631 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.76s
2026-02-02 02:49:21.257653 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.63s
2026-02-02 02:49:22.023849 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-02-02 02:49:34.326495 | orchestrator | 2026-02-02 02:49:34 | INFO  | Task 1081d177-716a-4aad-9550-d73166d5a42a (reboot) was prepared for execution.
2026-02-02 02:49:34.326555 | orchestrator | 2026-02-02 02:49:34 | INFO  | It takes a moment until task 1081d177-716a-4aad-9550-d73166d5a42a (reboot) has been started and output is visible here.
2026-02-02 02:49:43.908390 | orchestrator |
2026-02-02 02:49:43.908508 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-02 02:49:43.908526 | orchestrator |
2026-02-02 02:49:43.908539 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-02 02:49:43.908551 | orchestrator | Monday 02 February 2026 02:49:38 +0000 (0:00:00.189) 0:00:00.189 *******
2026-02-02 02:49:43.908562 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:49:43.908574 | orchestrator |
2026-02-02 02:49:43.908585 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-02 02:49:43.908596 | orchestrator | Monday 02 February 2026 02:49:38 +0000 (0:00:00.096) 0:00:00.285 *******
2026-02-02 02:49:43.908607 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:49:43.908618 | orchestrator |
2026-02-02 02:49:43.908629 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-02 02:49:43.908677 | orchestrator | Monday 02 February 2026 02:49:39 +0000 (0:00:00.877) 0:00:01.163 *******
2026-02-02 02:49:43.908689 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:49:43.908787 | orchestrator |
2026-02-02 02:49:43.908800 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-02 02:49:43.908811 | orchestrator |
2026-02-02 02:49:43.908822 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-02 02:49:43.908833 | orchestrator | Monday 02 February 2026 02:49:39 +0000 (0:00:00.101) 0:00:01.265 *******
2026-02-02 02:49:43.908844 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:49:43.908855 | orchestrator |
2026-02-02 02:49:43.908866 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-02 02:49:43.908876 | orchestrator | Monday 02 February 2026 02:49:39 +0000 (0:00:00.095) 0:00:01.360 *******
2026-02-02 02:49:43.908887 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:49:43.908901 | orchestrator |
2026-02-02 02:49:43.908913 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-02 02:49:43.908926 | orchestrator | Monday 02 February 2026 02:49:40 +0000 (0:00:00.637) 0:00:01.997 *******
2026-02-02 02:49:43.908938 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:49:43.908951 | orchestrator |
2026-02-02 02:49:43.908964 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-02 02:49:43.908977 | orchestrator |
2026-02-02 02:49:43.908989 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-02 02:49:43.909002 | orchestrator | Monday 02 February 2026 02:49:40 +0000 (0:00:00.092) 0:00:02.089 *******
2026-02-02 02:49:43.909014 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:49:43.909027 | orchestrator |
2026-02-02 02:49:43.909040 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-02 02:49:43.909054 | orchestrator | Monday 02 February 2026 02:49:40 +0000 (0:00:00.187) 0:00:02.277 *******
2026-02-02 02:49:43.909066 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:49:43.909079 | orchestrator |
2026-02-02 02:49:43.909111 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-02 02:49:43.909132 | orchestrator | Monday 02 February 2026 02:49:40 +0000 (0:00:00.607) 0:00:02.884 *******
2026-02-02 02:49:43.909150 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:49:43.909169 | orchestrator |
2026-02-02 02:49:43.909190 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-02 02:49:43.909209 | orchestrator |
2026-02-02 02:49:43.909228 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-02 02:49:43.909247 | orchestrator | Monday 02 February 2026 02:49:41 +0000 (0:00:00.122) 0:00:03.007 *******
2026-02-02 02:49:43.909267 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:49:43.909287 | orchestrator |
2026-02-02 02:49:43.909307 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-02 02:49:43.909326 | orchestrator | Monday 02 February 2026 02:49:41 +0000 (0:00:00.078) 0:00:03.086 *******
2026-02-02 02:49:43.909345 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:49:43.909366 | orchestrator |
2026-02-02 02:49:43.909387 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-02 02:49:43.909407 | orchestrator | Monday 02 February 2026 02:49:41 +0000 (0:00:00.617) 0:00:03.704 *******
2026-02-02 02:49:43.909428 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:49:43.909448 | orchestrator |
2026-02-02 02:49:43.909469 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-02 02:49:43.909491 | orchestrator |
2026-02-02 02:49:43.909511 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-02 02:49:43.909528 | orchestrator | Monday 02 February 2026 02:49:41 +0000 (0:00:00.122) 0:00:03.827 *******
2026-02-02 02:49:43.909540 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:49:43.909551 | orchestrator |
2026-02-02 02:49:43.909562 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-02 02:49:43.909585 | orchestrator | Monday 02 February 2026 02:49:41 +0000 (0:00:00.105) 0:00:03.932 *******
2026-02-02 02:49:43.909596 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:49:43.909607 | orchestrator |
2026-02-02 02:49:43.909618 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-02 02:49:43.909629 | orchestrator | Monday 02 February 2026 02:49:42 +0000 (0:00:00.653) 0:00:04.586 *******
2026-02-02 02:49:43.909639 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:49:43.909650 | orchestrator |
2026-02-02 02:49:43.909661 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-02 02:49:43.909672 | orchestrator |
2026-02-02 02:49:43.909683 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-02 02:49:43.909719 | orchestrator | Monday 02 February 2026 02:49:42 +0000 (0:00:00.111) 0:00:04.698 *******
2026-02-02 02:49:43.909739 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:49:43.909759 | orchestrator |
2026-02-02 02:49:43.909777 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-02 02:49:43.909795 | orchestrator | Monday 02 February 2026 02:49:42 +0000 (0:00:00.104) 0:00:04.802 *******
2026-02-02 02:49:43.909823 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:49:43.909835 | orchestrator |
2026-02-02 02:49:43.909846 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-02 02:49:43.909857 | orchestrator | Monday 02 February 2026 02:49:43 +0000 (0:00:00.643) 0:00:05.446 *******
2026-02-02 02:49:43.909889 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:49:43.909900 | orchestrator |
2026-02-02 02:49:43.909911 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 02:49:43.909923 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 02:49:43.909936 | orchestrator |
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 02:49:43.909946 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 02:49:43.909957 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 02:49:43.909968 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 02:49:43.909979 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 02:49:43.909998 | orchestrator | 2026-02-02 02:49:43.910101 | orchestrator | 2026-02-02 02:49:43.910130 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 02:49:43.910149 | orchestrator | Monday 02 February 2026 02:49:43 +0000 (0:00:00.032) 0:00:05.479 ******* 2026-02-02 02:49:43.910169 | orchestrator | =============================================================================== 2026-02-02 02:49:43.910186 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.04s 2026-02-02 02:49:43.910203 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.67s 2026-02-02 02:49:43.910220 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.58s 2026-02-02 02:49:44.267587 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-02-02 02:49:56.451772 | orchestrator | 2026-02-02 02:49:56 | INFO  | Task ca415c18-ba74-446a-bc6a-7adbab37b9ce (wait-for-connection) was prepared for execution. 2026-02-02 02:49:56.451868 | orchestrator | 2026-02-02 02:49:56 | INFO  | It takes a moment until task ca415c18-ba74-446a-bc6a-7adbab37b9ce (wait-for-connection) has been started and output is visible here. 
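The sequence above runs the reboot playbook with "do not wait", then follows up with `osism apply wait-for-connection` to block until the nodes answer again. A minimal sketch of that two-step pattern (illustrative only, not the OSISM playbooks themselves; the `wait_for_ssh` helper name is invented here):

```shell
# Trigger-and-poll pattern: fire the reboot without waiting for it to finish,
# then poll until an SSH no-op succeeds or the attempt budget runs out.
wait_for_ssh() {
    local host=$1 max_attempts=${2:-60} attempt=1
    until ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; do
        if (( attempt++ >= max_attempts )); then
            echo "timed out waiting for $host" >&2
            return 1
        fi
        sleep 10
    done
}
```

With a reachable host, `wait_for_ssh testbed-node-0` returns as soon as the node accepts connections again, mirroring the `ok: [testbed-node-*]` results in the wait-for-connection play.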
2026-02-02 02:50:12.963618 | orchestrator | 2026-02-02 02:50:12.963826 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-02-02 02:50:12.963847 | orchestrator | 2026-02-02 02:50:12.963859 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-02-02 02:50:12.963871 | orchestrator | Monday 02 February 2026 02:50:00 +0000 (0:00:00.266) 0:00:00.266 ******* 2026-02-02 02:50:12.963882 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:50:12.963895 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:50:12.963906 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:50:12.963917 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:50:12.963927 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:50:12.963938 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:50:12.963949 | orchestrator | 2026-02-02 02:50:12.963961 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 02:50:12.963973 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 02:50:12.963985 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 02:50:12.963997 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 02:50:12.964008 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 02:50:12.964019 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 02:50:12.964030 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 02:50:12.964041 | orchestrator | 2026-02-02 02:50:12.964053 | orchestrator | 2026-02-02 02:50:12.964064 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-02 02:50:12.964075 | orchestrator | Monday 02 February 2026 02:50:12 +0000 (0:00:11.556) 0:00:11.823 ******* 2026-02-02 02:50:12.964086 | orchestrator | =============================================================================== 2026-02-02 02:50:12.964097 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.56s 2026-02-02 02:50:13.306803 | orchestrator | + osism apply hddtemp 2026-02-02 02:50:25.503045 | orchestrator | 2026-02-02 02:50:25 | INFO  | Task 25af2f1a-08dd-41df-a8f3-e0414ced26e5 (hddtemp) was prepared for execution. 2026-02-02 02:50:25.503124 | orchestrator | 2026-02-02 02:50:25 | INFO  | It takes a moment until task 25af2f1a-08dd-41df-a8f3-e0414ced26e5 (hddtemp) has been started and output is visible here. 2026-02-02 02:50:53.811033 | orchestrator | 2026-02-02 02:50:53.811144 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-02-02 02:50:53.811161 | orchestrator | 2026-02-02 02:50:53.811174 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-02-02 02:50:53.811186 | orchestrator | Monday 02 February 2026 02:50:29 +0000 (0:00:00.280) 0:00:00.280 ******* 2026-02-02 02:50:53.811197 | orchestrator | ok: [testbed-manager] 2026-02-02 02:50:53.811209 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:50:53.811220 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:50:53.811231 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:50:53.811242 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:50:53.811253 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:50:53.811264 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:50:53.811274 | orchestrator | 2026-02-02 02:50:53.811285 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-02-02 02:50:53.811296 | orchestrator | Monday 02 February 2026 
02:50:30 +0000 (0:00:00.735) 0:00:01.015 ******* 2026-02-02 02:50:53.811309 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 02:50:53.811347 | orchestrator | 2026-02-02 02:50:53.811359 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-02-02 02:50:53.811369 | orchestrator | Monday 02 February 2026 02:50:31 +0000 (0:00:01.217) 0:00:02.233 ******* 2026-02-02 02:50:53.811380 | orchestrator | ok: [testbed-manager] 2026-02-02 02:50:53.811391 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:50:53.811402 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:50:53.811412 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:50:53.811438 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:50:53.811449 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:50:53.811460 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:50:53.811470 | orchestrator | 2026-02-02 02:50:53.811481 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-02-02 02:50:53.811492 | orchestrator | Monday 02 February 2026 02:50:33 +0000 (0:00:01.999) 0:00:04.232 ******* 2026-02-02 02:50:53.811503 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:50:53.811515 | orchestrator | changed: [testbed-manager] 2026-02-02 02:50:53.811525 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:50:53.811536 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:50:53.811547 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:50:53.811557 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:50:53.811568 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:50:53.811581 | orchestrator | 2026-02-02 02:50:53.811594 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-02-02 02:50:53.811608 | orchestrator | Monday 02 February 2026 02:50:35 +0000 (0:00:01.224) 0:00:05.457 ******* 2026-02-02 02:50:53.811620 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:50:53.811633 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:50:53.811645 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:50:53.811658 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:50:53.811671 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:50:53.811698 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:50:53.811736 | orchestrator | ok: [testbed-manager] 2026-02-02 02:50:53.811749 | orchestrator | 2026-02-02 02:50:53.811762 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-02-02 02:50:53.811775 | orchestrator | Monday 02 February 2026 02:50:36 +0000 (0:00:01.147) 0:00:06.604 ******* 2026-02-02 02:50:53.811788 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:50:53.811801 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:50:53.811814 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:50:53.811826 | orchestrator | changed: [testbed-manager] 2026-02-02 02:50:53.811837 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:50:53.811847 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:50:53.811858 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:50:53.811869 | orchestrator | 2026-02-02 02:50:53.811879 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-02-02 02:50:53.811890 | orchestrator | Monday 02 February 2026 02:50:37 +0000 (0:00:00.835) 0:00:07.440 ******* 2026-02-02 02:50:53.811901 | orchestrator | changed: [testbed-manager] 2026-02-02 02:50:53.811911 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:50:53.811922 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:50:53.811933 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:50:53.811943 | orchestrator | changed: 
[testbed-node-4] 2026-02-02 02:50:53.811954 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:50:53.811964 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:50:53.811975 | orchestrator | 2026-02-02 02:50:53.811986 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-02-02 02:50:53.811997 | orchestrator | Monday 02 February 2026 02:50:49 +0000 (0:00:12.165) 0:00:19.606 ******* 2026-02-02 02:50:53.812008 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 02:50:53.812028 | orchestrator | 2026-02-02 02:50:53.812039 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-02-02 02:50:53.812050 | orchestrator | Monday 02 February 2026 02:50:50 +0000 (0:00:01.278) 0:00:20.884 ******* 2026-02-02 02:50:53.812061 | orchestrator | changed: [testbed-manager] 2026-02-02 02:50:53.812071 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:50:53.812082 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:50:53.812093 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:50:53.812104 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:50:53.812114 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:50:53.812125 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:50:53.812136 | orchestrator | 2026-02-02 02:50:53.812147 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 02:50:53.812158 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 02:50:53.812191 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 02:50:53.812204 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 02:50:53.812215 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 02:50:53.812226 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 02:50:53.812237 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 02:50:53.812247 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 02:50:53.812258 | orchestrator | 2026-02-02 02:50:53.812269 | orchestrator | 2026-02-02 02:50:53.812280 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 02:50:53.812291 | orchestrator | Monday 02 February 2026 02:50:53 +0000 (0:00:02.756) 0:00:23.641 ******* 2026-02-02 02:50:53.812302 | orchestrator | =============================================================================== 2026-02-02 02:50:53.812312 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.17s 2026-02-02 02:50:53.812323 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.76s 2026-02-02 02:50:53.812334 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.00s 2026-02-02 02:50:53.812345 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.28s 2026-02-02 02:50:53.812356 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.22s 2026-02-02 02:50:53.812366 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.22s 2026-02-02 02:50:53.812377 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.15s 2026-02-02 02:50:53.812388 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.84s 2026-02-02 02:50:53.812399 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.74s 2026-02-02 02:50:54.216315 | orchestrator | ++ semver 9.5.0 7.1.1 2026-02-02 02:50:54.264386 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-02 02:50:54.264481 | orchestrator | + sudo systemctl restart manager.service 2026-02-02 02:51:11.808780 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-02 02:51:11.808902 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-02 02:51:11.808931 | orchestrator | + local max_attempts=60 2026-02-02 02:51:11.808946 | orchestrator | + local name=ceph-ansible 2026-02-02 02:51:11.808955 | orchestrator | + local attempt_num=1 2026-02-02 02:51:11.808963 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 02:51:11.837878 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-02 02:51:11.837989 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-02 02:51:11.838010 | orchestrator | + sleep 5 2026-02-02 02:51:16.842313 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 02:51:16.908519 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-02 02:51:16.908613 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-02 02:51:16.908626 | orchestrator | + sleep 5 2026-02-02 02:51:21.913696 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 02:51:21.947357 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-02 02:51:21.947472 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-02 02:51:21.947489 | orchestrator | + sleep 5 2026-02-02 02:51:26.953086 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 02:51:26.990223 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-02 02:51:26.990316 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2026-02-02 02:51:26.990323 | orchestrator | + sleep 5 2026-02-02 02:51:31.994551 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 02:51:32.028956 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-02 02:51:32.029095 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-02 02:51:32.029111 | orchestrator | + sleep 5 2026-02-02 02:51:37.034249 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 02:51:37.067974 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-02 02:51:37.068057 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-02 02:51:37.068067 | orchestrator | + sleep 5 2026-02-02 02:51:42.073251 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 02:51:42.117508 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-02 02:51:42.117591 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-02 02:51:42.117599 | orchestrator | + sleep 5 2026-02-02 02:51:47.123494 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 02:51:47.175510 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-02 02:51:47.175587 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-02 02:51:47.175594 | orchestrator | + sleep 5 2026-02-02 02:51:52.176937 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 02:51:52.203174 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-02 02:51:52.203284 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-02 02:51:52.203309 | orchestrator | + sleep 5 2026-02-02 02:51:57.206863 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 02:51:57.249576 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-02 02:51:57.249677 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-02-02 02:51:57.249692 | orchestrator | + sleep 5 2026-02-02 02:52:02.253804 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 02:52:02.295928 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-02 02:52:02.296029 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-02 02:52:02.296053 | orchestrator | + sleep 5 2026-02-02 02:52:07.299077 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 02:52:07.336657 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-02 02:52:07.336791 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-02 02:52:07.336805 | orchestrator | + sleep 5 2026-02-02 02:52:12.341414 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 02:52:12.380849 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-02 02:52:12.380938 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-02 02:52:12.380952 | orchestrator | + sleep 5 2026-02-02 02:52:17.385030 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-02 02:52:17.419830 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-02 02:52:17.419908 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-02 02:52:17.419919 | orchestrator | + local max_attempts=60 2026-02-02 02:52:17.419927 | orchestrator | + local name=kolla-ansible 2026-02-02 02:52:17.419935 | orchestrator | + local attempt_num=1 2026-02-02 02:52:17.421342 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-02 02:52:17.447015 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-02 02:52:17.447090 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-02 02:52:17.447125 | orchestrator | + local max_attempts=60 2026-02-02 02:52:17.447133 | orchestrator | + local name=osism-ansible 2026-02-02 02:52:17.447139 | 
orchestrator | + local attempt_num=1 2026-02-02 02:52:17.447429 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-02 02:52:17.478809 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-02 02:52:17.478900 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-02 02:52:17.478913 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-02 02:52:17.654988 | orchestrator | ARA in ceph-ansible already disabled. 2026-02-02 02:52:17.806478 | orchestrator | ARA in kolla-ansible already disabled. 2026-02-02 02:52:17.970411 | orchestrator | ARA in osism-ansible already disabled. 2026-02-02 02:52:18.130840 | orchestrator | ARA in osism-kubernetes already disabled. 2026-02-02 02:52:18.131299 | orchestrator | + osism apply gather-facts 2026-02-02 02:52:30.401046 | orchestrator | 2026-02-02 02:52:30 | INFO  | Task 38c31e9a-6ef2-46c0-9fbe-255b70dd3c36 (gather-facts) was prepared for execution. 2026-02-02 02:52:30.401121 | orchestrator | 2026-02-02 02:52:30 | INFO  | It takes a moment until task 38c31e9a-6ef2-46c0-9fbe-255b70dd3c36 (gather-facts) has been started and output is visible here. 
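The repeated `docker inspect` / `sleep 5` lines above are the expanded trace of the `wait_for_container_healthy` helper. A sketch reconstructing it from the trace (the actual script shipped with the testbed configuration may differ in detail):

```shell
# Poll the container's health status every 5 seconds until Docker reports
# "healthy", giving up after max_attempts checks - as seen in the trace,
# ceph-ansible moved through unhealthy -> starting -> healthy.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name never became healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```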
2026-02-02 02:52:43.709896 | orchestrator | 2026-02-02 02:52:43.709983 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-02 02:52:43.709992 | orchestrator | 2026-02-02 02:52:43.709998 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-02 02:52:43.710004 | orchestrator | Monday 02 February 2026 02:52:34 +0000 (0:00:00.221) 0:00:00.221 ******* 2026-02-02 02:52:43.710010 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:52:43.710053 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:52:43.710060 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:52:43.710065 | orchestrator | ok: [testbed-manager] 2026-02-02 02:52:43.710070 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:52:43.710075 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:52:43.710080 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:52:43.710085 | orchestrator | 2026-02-02 02:52:43.710091 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-02 02:52:43.710096 | orchestrator | 2026-02-02 02:52:43.710101 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-02 02:52:43.710107 | orchestrator | Monday 02 February 2026 02:52:42 +0000 (0:00:07.951) 0:00:08.173 ******* 2026-02-02 02:52:43.710112 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:52:43.710118 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:52:43.710123 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:52:43.710128 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:52:43.710133 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:52:43.710138 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:52:43.710143 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:52:43.710148 | orchestrator | 2026-02-02 02:52:43.710153 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-02 02:52:43.710159 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 02:52:43.710165 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 02:52:43.710170 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 02:52:43.710175 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 02:52:43.710180 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 02:52:43.710185 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 02:52:43.710208 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 02:52:43.710214 | orchestrator | 2026-02-02 02:52:43.710219 | orchestrator | 2026-02-02 02:52:43.710223 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 02:52:43.710228 | orchestrator | Monday 02 February 2026 02:52:43 +0000 (0:00:00.538) 0:00:08.711 ******* 2026-02-02 02:52:43.710233 | orchestrator | =============================================================================== 2026-02-02 02:52:43.710238 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.95s 2026-02-02 02:52:43.710243 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s 2026-02-02 02:52:44.072236 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-02-02 02:52:44.083480 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-02-02 
02:52:44.098116 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-02-02 02:52:44.110263 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-02-02 02:52:44.122910 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-02-02 02:52:44.140191 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-02-02 02:52:44.160277 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-02-02 02:52:44.176628 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-02-02 02:52:44.193840 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-02-02 02:52:44.213811 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-02-02 02:52:44.229155 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-02-02 02:52:44.246548 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-02-02 02:52:44.264339 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-02-02 02:52:44.283103 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-02-02 02:52:44.302120 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-02-02 02:52:44.322047 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-02-02 02:52:44.341255 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-02-02 02:52:44.352287 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-02-02 02:52:44.363613 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-02-02 02:52:44.376197 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-02-02 02:52:44.392567 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-02-02 02:52:44.403514 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-02-02 02:52:44.421930 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-02-02 02:52:44.442326 | orchestrator | + [[ false == \t\r\u\e ]] 2026-02-02 02:52:44.546931 | orchestrator | ok: Runtime: 0:24:39.839282 2026-02-02 02:52:44.635238 | 2026-02-02 02:52:44.635370 | TASK [Deploy services] 2026-02-02 02:52:45.377579 | orchestrator | 2026-02-02 02:52:45.377807 | orchestrator | # DEPLOY SERVICES 2026-02-02 02:52:45.377838 | orchestrator | 2026-02-02 02:52:45.377854 | orchestrator | + set -e 2026-02-02 02:52:45.377867 | orchestrator | + echo 2026-02-02 02:52:45.377881 | orchestrator | + echo '# DEPLOY SERVICES' 2026-02-02 02:52:45.377895 | orchestrator | + echo 2026-02-02 02:52:45.377939 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-02 02:52:45.377963 | orchestrator | ++ export INTERACTIVE=false 2026-02-02 02:52:45.377978 | orchestrator | ++ INTERACTIVE=false 2026-02-02 
02:52:45.377991 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-02 02:52:45.378012 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-02 02:52:45.378079 | orchestrator | + source /opt/manager-vars.sh 2026-02-02 02:52:45.378096 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-02 02:52:45.378108 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-02 02:52:45.378126 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-02 02:52:45.378137 | orchestrator | ++ CEPH_VERSION=reef 2026-02-02 02:52:45.378152 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-02 02:52:45.378165 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-02 02:52:45.378180 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-02 02:52:45.378191 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-02 02:52:45.378202 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-02 02:52:45.378218 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-02 02:52:45.378230 | orchestrator | ++ export ARA=false 2026-02-02 02:52:45.378241 | orchestrator | ++ ARA=false 2026-02-02 02:52:45.378253 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-02 02:52:45.378282 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-02 02:52:45.378303 | orchestrator | ++ export TEMPEST=false 2026-02-02 02:52:45.378316 | orchestrator | ++ TEMPEST=false 2026-02-02 02:52:45.378327 | orchestrator | ++ export IS_ZUUL=true 2026-02-02 02:52:45.378338 | orchestrator | ++ IS_ZUUL=true 2026-02-02 02:52:45.378349 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.102 2026-02-02 02:52:45.378360 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.102 2026-02-02 02:52:45.378372 | orchestrator | ++ export EXTERNAL_API=false 2026-02-02 02:52:45.378383 | orchestrator | ++ EXTERNAL_API=false 2026-02-02 02:52:45.378402 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-02 02:52:45.378420 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-02 02:52:45.378452 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-02 
02:52:45.378470 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-02 02:52:45.378488 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-02 02:52:45.378516 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-02 02:52:45.378535 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-02-02 02:52:45.384655 | orchestrator | + set -e 2026-02-02 02:52:45.384759 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-02 02:52:45.384777 | orchestrator | ++ export INTERACTIVE=false 2026-02-02 02:52:45.384789 | orchestrator | ++ INTERACTIVE=false 2026-02-02 02:52:45.384800 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-02 02:52:45.384810 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-02 02:52:45.384819 | orchestrator | + source /opt/manager-vars.sh 2026-02-02 02:52:45.384829 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-02 02:52:45.384839 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-02 02:52:45.384849 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-02 02:52:45.384859 | orchestrator | ++ CEPH_VERSION=reef 2026-02-02 02:52:45.384869 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-02 02:52:45.384878 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-02 02:52:45.384888 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-02 02:52:45.384898 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-02 02:52:45.384908 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-02 02:52:45.384918 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-02 02:52:45.384928 | orchestrator | ++ export ARA=false 2026-02-02 02:52:45.384938 | orchestrator | ++ ARA=false 2026-02-02 02:52:45.384948 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-02 02:52:45.384957 | orchestrator | 2026-02-02 02:52:45.384967 | orchestrator | # PULL IMAGES 2026-02-02 02:52:45.384981 | orchestrator | 2026-02-02 02:52:45.384990 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-02 02:52:45.385000 | orchestrator | ++ export TEMPEST=false 
2026-02-02 02:52:45.385010 | orchestrator | ++ TEMPEST=false 2026-02-02 02:52:45.385020 | orchestrator | ++ export IS_ZUUL=true 2026-02-02 02:52:45.385029 | orchestrator | ++ IS_ZUUL=true 2026-02-02 02:52:45.385039 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.102 2026-02-02 02:52:45.385049 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.102 2026-02-02 02:52:45.385058 | orchestrator | ++ export EXTERNAL_API=false 2026-02-02 02:52:45.385068 | orchestrator | ++ EXTERNAL_API=false 2026-02-02 02:52:45.385077 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-02 02:52:45.385087 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-02 02:52:45.385122 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-02 02:52:45.385132 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-02 02:52:45.385142 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-02 02:52:45.385152 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-02 02:52:45.385162 | orchestrator | + echo 2026-02-02 02:52:45.385172 | orchestrator | + echo '# PULL IMAGES' 2026-02-02 02:52:45.385182 | orchestrator | + echo 2026-02-02 02:52:45.385914 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-02 02:52:45.442122 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-02 02:52:45.442197 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-02-02 02:52:47.433561 | orchestrator | 2026-02-02 02:52:47 | INFO  | Trying to run play pull-images in environment custom 2026-02-02 02:52:57.596110 | orchestrator | 2026-02-02 02:52:57 | INFO  | Task cc8bcc7e-a993-43ed-8fda-5646b60a03c2 (pull-images) was prepared for execution. 2026-02-02 02:52:57.596235 | orchestrator | 2026-02-02 02:52:57 | INFO  | Task cc8bcc7e-a993-43ed-8fda-5646b60a03c2 is running in background. No more output. Check ARA for logs. 
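The gate above runs `semver 9.5.0 7.0.0`, captures the result, and only triggers `pull-images` when the comparison is non-negative (`[[ 1 -ge 0 ]]`). A minimal sketch of such a comparator in pure bash (the `semver_cmp` name and the `sort -V` approach are assumptions for illustration, not the helper the testbed actually ships):

```shell
#!/usr/bin/env bash
# semver_cmp A B -> prints 1 if A > B, 0 if equal, -1 if A < B.
# Sketch only: relies on GNU sort -V and ignores pre-release/build metadata.
semver_cmp() {
    local a="$1" b="$2"
    if [ "$a" = "$b" ]; then
        echo 0
        return
    fi
    # sort -V orders version strings; the first line is the smaller one.
    if [ "$(printf '%s\n%s\n' "$a" "$b" | sort -V | head -n1)" = "$a" ]; then
        echo -1
    else
        echo 1
    fi
}

# Mirror of the gate in the log: only pull images on manager >= 7.0.0.
if [ "$(semver_cmp 9.5.0 7.0.0)" -ge 0 ]; then
    echo "would run: osism apply --no-wait -r 2 -e custom pull-images"
fi
```

The real job passes the comparison result straight into a bash numeric test, which is why `1 -ge 0` appears in the trace.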
2026-02-02 02:52:57.947573 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh 2026-02-02 02:53:10.162949 | orchestrator | 2026-02-02 02:53:10 | INFO  | Task c957c1fe-9774-4e70-be1f-19f3d9b06f36 (cgit) was prepared for execution. 2026-02-02 02:53:10.163057 | orchestrator | 2026-02-02 02:53:10 | INFO  | Task c957c1fe-9774-4e70-be1f-19f3d9b06f36 is running in background. No more output. Check ARA for logs. 2026-02-02 02:53:23.584095 | orchestrator | 2026-02-02 02:53:23 | INFO  | Task d8845dc0-6f4f-480b-b6cc-d336416dbefe (dotfiles) was prepared for execution. 2026-02-02 02:53:23.584220 | orchestrator | 2026-02-02 02:53:23 | INFO  | Task d8845dc0-6f4f-480b-b6cc-d336416dbefe is running in background. No more output. Check ARA for logs. 2026-02-02 02:53:36.961377 | orchestrator | 2026-02-02 02:53:36 | INFO  | Task 466b7211-1033-4485-ae69-4b47569430ca (homer) was prepared for execution. 2026-02-02 02:53:36.961470 | orchestrator | 2026-02-02 02:53:36 | INFO  | Task 466b7211-1033-4485-ae69-4b47569430ca is running in background. No more output. Check ARA for logs. 2026-02-02 02:53:49.561456 | orchestrator | 2026-02-02 02:53:49 | INFO  | Task 973dcc78-b2fa-42af-825b-eb4ea879f555 (phpmyadmin) was prepared for execution. 2026-02-02 02:53:49.561555 | orchestrator | 2026-02-02 02:53:49 | INFO  | Task 973dcc78-b2fa-42af-825b-eb4ea879f555 is running in background. No more output. Check ARA for logs. 2026-02-02 02:54:02.241786 | orchestrator | 2026-02-02 02:54:02 | INFO  | Task f7858edc-38c7-4ff9-a57c-dee976088683 (sosreport) was prepared for execution. 2026-02-02 02:54:02.241880 | orchestrator | 2026-02-02 02:54:02 | INFO  | Task f7858edc-38c7-4ff9-a57c-dee976088683 is running in background. No more output. Check ARA for logs. 
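The `001-helpers.sh` step above queues one background task per helper service (cgit, dotfiles, homer, phpmyadmin, sosreport). The pattern can be sketched as a simple loop; the play list is taken from the log, while `run_play` is a stub standing in for the real `osism apply --no-wait` call, which this sketch deliberately does not invoke:

```shell
#!/usr/bin/env bash
set -e

# Helper plays queued by the deploy script, per the log above.
plays=(cgit dotfiles homer phpmyadmin sosreport)

# Stub standing in for 'osism apply --no-wait <play>' (assumption:
# real deployments call the osism CLI here and let ARA collect logs).
run_play() {
    echo "queued play: $1"
}

for play in "${plays[@]}"; do
    run_play "$play"
done
```

Because the tasks run with `--no-wait`, each loop iteration returns as soon as the task is prepared; the console only shows the preparation messages.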
2026-02-02 02:54:02.600479 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh 2026-02-02 02:54:02.608342 | orchestrator | + set -e 2026-02-02 02:54:02.608473 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-02 02:54:02.608577 | orchestrator | ++ export INTERACTIVE=false 2026-02-02 02:54:02.608603 | orchestrator | ++ INTERACTIVE=false 2026-02-02 02:54:02.608625 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-02 02:54:02.608644 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-02 02:54:02.608664 | orchestrator | + source /opt/manager-vars.sh 2026-02-02 02:54:02.608676 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-02 02:54:02.608687 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-02 02:54:02.608698 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-02 02:54:02.608709 | orchestrator | ++ CEPH_VERSION=reef 2026-02-02 02:54:02.608720 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-02 02:54:02.608732 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-02 02:54:02.608743 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-02 02:54:02.608792 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-02 02:54:02.608811 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-02 02:54:02.608828 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-02 02:54:02.608846 | orchestrator | ++ export ARA=false 2026-02-02 02:54:02.608866 | orchestrator | ++ ARA=false 2026-02-02 02:54:02.608879 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-02 02:54:02.608987 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-02 02:54:02.609002 | orchestrator | ++ export TEMPEST=false 2026-02-02 02:54:02.609016 | orchestrator | ++ TEMPEST=false 2026-02-02 02:54:02.609027 | orchestrator | ++ export IS_ZUUL=true 2026-02-02 02:54:02.609038 | orchestrator | ++ IS_ZUUL=true 2026-02-02 02:54:02.609066 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.102 2026-02-02 02:54:02.609083 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.102 2026-02-02 02:54:02.609095 | orchestrator | ++ export EXTERNAL_API=false 2026-02-02 02:54:02.609106 | orchestrator | ++ EXTERNAL_API=false 2026-02-02 02:54:02.609117 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-02 02:54:02.609128 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-02 02:54:02.609139 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-02 02:54:02.609150 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-02 02:54:02.609164 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-02 02:54:02.609182 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-02 02:54:02.609422 | orchestrator | ++ semver 9.5.0 8.0.3 2026-02-02 02:54:02.669607 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-02 02:54:02.669706 | orchestrator | + osism apply frr 2026-02-02 02:54:15.060568 | orchestrator | 2026-02-02 02:54:15 | INFO  | Task e1cc68b5-e274-4d16-9805-0a7150e55bda (frr) was prepared for execution. 2026-02-02 02:54:15.060648 | orchestrator | 2026-02-02 02:54:15 | INFO  | It takes a moment until task e1cc68b5-e274-4d16-9805-0a7150e55bda (frr) has been started and output is visible here. 
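`include.sh` exports `OSISM_APPLY_RETRY=1`, and some calls in the trace pass an explicit retry count (`-r 2`). A generic retry wrapper for such apply calls might look like the following sketch; the testbed scripts actually hand the retry count straight to the osism CLI rather than wrapping it in shell, so `retry` here is purely illustrative:

```shell
#!/usr/bin/env bash

# retry N CMD... : run CMD up to N times, stopping on first success.
retry() {
    local attempts="$1"; shift
    local i
    for ((i = 1; i <= attempts; i++)); do
        if "$@"; then
            return 0
        fi
        echo "attempt $i/$attempts failed: $*" >&2
    done
    return 1
}

# Example: 'true' succeeds immediately, so the command runs exactly once.
retry "${OSISM_APPLY_RETRY:-1}" true
```

Keeping the retry count in an environment variable lets CI jobs tighten or loosen it without editing the deploy scripts.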
2026-02-02 02:54:53.396172 | orchestrator | 2026-02-02 02:54:53.396278 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-02-02 02:54:53.396292 | orchestrator | 2026-02-02 02:54:53.396302 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-02-02 02:54:53.396319 | orchestrator | Monday 02 February 2026 02:54:23 +0000 (0:00:00.432) 0:00:00.432 ******* 2026-02-02 02:54:53.396329 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-02-02 02:54:53.396340 | orchestrator | 2026-02-02 02:54:53.396349 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-02-02 02:54:53.396358 | orchestrator | Monday 02 February 2026 02:54:23 +0000 (0:00:00.277) 0:00:00.710 ******* 2026-02-02 02:54:53.396367 | orchestrator | changed: [testbed-manager] 2026-02-02 02:54:53.396377 | orchestrator | 2026-02-02 02:54:53.396385 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-02-02 02:54:53.396397 | orchestrator | Monday 02 February 2026 02:54:25 +0000 (0:00:02.067) 0:00:02.777 ******* 2026-02-02 02:54:53.396405 | orchestrator | changed: [testbed-manager] 2026-02-02 02:54:53.396414 | orchestrator | 2026-02-02 02:54:53.396423 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-02-02 02:54:53.396432 | orchestrator | Monday 02 February 2026 02:54:40 +0000 (0:00:15.274) 0:00:18.052 ******* 2026-02-02 02:54:53.396441 | orchestrator | ok: [testbed-manager] 2026-02-02 02:54:53.396450 | orchestrator | 2026-02-02 02:54:53.396459 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-02-02 02:54:53.396468 | orchestrator | Monday 02 February 2026 02:54:42 +0000 (0:00:01.503) 0:00:19.555 ******* 2026-02-02 
02:54:53.396476 | orchestrator | changed: [testbed-manager] 2026-02-02 02:54:53.396485 | orchestrator | 2026-02-02 02:54:53.396494 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-02-02 02:54:53.396502 | orchestrator | Monday 02 February 2026 02:54:44 +0000 (0:00:01.856) 0:00:21.412 ******* 2026-02-02 02:54:53.396511 | orchestrator | ok: [testbed-manager] 2026-02-02 02:54:53.396520 | orchestrator | 2026-02-02 02:54:53.396528 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-02-02 02:54:53.396538 | orchestrator | Monday 02 February 2026 02:54:45 +0000 (0:00:01.593) 0:00:23.005 ******* 2026-02-02 02:54:53.396547 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:54:53.396556 | orchestrator | 2026-02-02 02:54:53.396564 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-02-02 02:54:53.396573 | orchestrator | Monday 02 February 2026 02:54:45 +0000 (0:00:00.150) 0:00:23.156 ******* 2026-02-02 02:54:53.396602 | orchestrator | skipping: [testbed-manager] 2026-02-02 02:54:53.396614 | orchestrator | 2026-02-02 02:54:53.396624 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-02-02 02:54:53.396634 | orchestrator | Monday 02 February 2026 02:54:46 +0000 (0:00:00.265) 0:00:23.421 ******* 2026-02-02 02:54:53.396644 | orchestrator | changed: [testbed-manager] 2026-02-02 02:54:53.396655 | orchestrator | 2026-02-02 02:54:53.396664 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-02-02 02:54:53.396675 | orchestrator | Monday 02 February 2026 02:54:47 +0000 (0:00:01.090) 0:00:24.512 ******* 2026-02-02 02:54:53.396685 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-02-02 02:54:53.396695 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-02-02 02:54:53.396706 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-02-02 02:54:53.396717 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-02-02 02:54:53.396727 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-02-02 02:54:53.396738 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-02-02 02:54:53.396748 | orchestrator | 2026-02-02 02:54:53.396797 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-02-02 02:54:53.396807 | orchestrator | Monday 02 February 2026 02:54:49 +0000 (0:00:02.431) 0:00:26.943 ******* 2026-02-02 02:54:53.396817 | orchestrator | ok: [testbed-manager] 2026-02-02 02:54:53.396827 | orchestrator | 2026-02-02 02:54:53.396837 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-02-02 02:54:53.396847 | orchestrator | Monday 02 February 2026 02:54:51 +0000 (0:00:01.818) 0:00:28.762 ******* 2026-02-02 02:54:53.396857 | orchestrator | changed: [testbed-manager] 2026-02-02 02:54:53.396867 | orchestrator | 2026-02-02 02:54:53.396877 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 02:54:53.396887 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 02:54:53.396898 | orchestrator | 2026-02-02 02:54:53.396909 | orchestrator | 2026-02-02 02:54:53.396926 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 02:54:53.396936 | orchestrator | Monday 02 February 2026 02:54:52 +0000 (0:00:01.488) 0:00:30.251 ******* 2026-02-02 02:54:53.396946 | 
orchestrator | =============================================================================== 2026-02-02 02:54:53.396957 | orchestrator | osism.services.frr : Install frr package ------------------------------- 15.27s 2026-02-02 02:54:53.396967 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.43s 2026-02-02 02:54:53.396977 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.07s 2026-02-02 02:54:53.396988 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.86s 2026-02-02 02:54:53.396999 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.82s 2026-02-02 02:54:53.397023 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.59s 2026-02-02 02:54:53.397034 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.50s 2026-02-02 02:54:53.397043 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.49s 2026-02-02 02:54:53.397051 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.09s 2026-02-02 02:54:53.397060 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.28s 2026-02-02 02:54:53.397068 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.27s 2026-02-02 02:54:53.397077 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.15s 2026-02-02 02:54:54.027301 | orchestrator | + osism apply kubernetes 2026-02-02 02:54:56.231168 | orchestrator | 2026-02-02 02:54:56 | INFO  | Task 4692e670-ac70-40ab-9fe0-9767884a640f (kubernetes) was prepared for execution. 
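The frr role's "Set sysctl parameters" task above applies six kernel settings on the manager (the role uses the Ansible `sysctl` module; rendering them as a persistent file under `/etc/sysctl.d/` is an assumption for illustration). As a config fragment they would look roughly like this:

```
# Routing prerequisites applied by osism.services.frr (values from the log)
net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.fib_multipath_hash_policy = 1
net.ipv4.conf.default.ignore_routes_with_linkdown = 1
net.ipv4.conf.all.rp_filter = 2
```

Forwarding plus multipath hashing and loose reverse-path filtering are typical prerequisites for a BGP-routed node like the one frr configures here.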
2026-02-02 02:54:56.231250 | orchestrator | 2026-02-02 02:54:56 | INFO  | It takes a moment until task 4692e670-ac70-40ab-9fe0-9767884a640f (kubernetes) has been started and output is visible here. 2026-02-02 02:55:21.168711 | orchestrator | 2026-02-02 02:55:21.168855 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-02-02 02:55:21.168874 | orchestrator | 2026-02-02 02:55:21.168883 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-02-02 02:55:21.168894 | orchestrator | Monday 02 February 2026 02:55:01 +0000 (0:00:00.178) 0:00:00.178 ******* 2026-02-02 02:55:21.168933 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:55:21.168943 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:55:21.168953 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:55:21.168961 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:55:21.168969 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:55:21.168977 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:55:21.168986 | orchestrator | 2026-02-02 02:55:21.168994 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-02-02 02:55:21.169002 | orchestrator | Monday 02 February 2026 02:55:01 +0000 (0:00:00.757) 0:00:00.936 ******* 2026-02-02 02:55:21.169010 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:55:21.169019 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:55:21.169027 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:55:21.169035 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:55:21.169043 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:55:21.169051 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:55:21.169059 | orchestrator | 2026-02-02 02:55:21.169067 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-02-02 02:55:21.169076 | orchestrator | Monday 02 February 2026 
02:55:02 +0000 (0:00:00.766) 0:00:01.702 ******* 2026-02-02 02:55:21.169084 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:55:21.169091 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:55:21.169098 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:55:21.169106 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:55:21.169113 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:55:21.169121 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:55:21.169129 | orchestrator | 2026-02-02 02:55:21.169137 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-02-02 02:55:21.169145 | orchestrator | Monday 02 February 2026 02:55:03 +0000 (0:00:00.832) 0:00:02.535 ******* 2026-02-02 02:55:21.169154 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:55:21.169162 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:55:21.169170 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:55:21.169180 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:55:21.169188 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:55:21.169196 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:55:21.169204 | orchestrator | 2026-02-02 02:55:21.169212 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-02-02 02:55:21.169221 | orchestrator | Monday 02 February 2026 02:55:05 +0000 (0:00:01.886) 0:00:04.421 ******* 2026-02-02 02:55:21.169229 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:55:21.169237 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:55:21.169245 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:55:21.169254 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:55:21.169261 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:55:21.169270 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:55:21.169279 | orchestrator | 2026-02-02 02:55:21.169287 | orchestrator | TASK [k3s_prereq : 
Enable IPv6 router advertisements] ************************** 2026-02-02 02:55:21.169295 | orchestrator | Monday 02 February 2026 02:55:06 +0000 (0:00:01.325) 0:00:05.746 ******* 2026-02-02 02:55:21.169304 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:55:21.169334 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:55:21.169343 | orchestrator | changed: [testbed-node-5] 2026-02-02 02:55:21.169351 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:55:21.169359 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:55:21.169368 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:55:21.169376 | orchestrator | 2026-02-02 02:55:21.169393 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-02-02 02:55:21.169403 | orchestrator | Monday 02 February 2026 02:55:07 +0000 (0:00:00.946) 0:00:06.693 ******* 2026-02-02 02:55:21.169411 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:55:21.169419 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:55:21.169427 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:55:21.169436 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:55:21.169444 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:55:21.169452 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:55:21.169460 | orchestrator | 2026-02-02 02:55:21.169468 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-02-02 02:55:21.169476 | orchestrator | Monday 02 February 2026 02:55:08 +0000 (0:00:00.833) 0:00:07.526 ******* 2026-02-02 02:55:21.169483 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:55:21.169491 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:55:21.169498 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:55:21.169507 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:55:21.169514 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:55:21.169522 | orchestrator | 
skipping: [testbed-node-2] 2026-02-02 02:55:21.169530 | orchestrator | 2026-02-02 02:55:21.169538 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-02-02 02:55:21.169547 | orchestrator | Monday 02 February 2026 02:55:09 +0000 (0:00:00.810) 0:00:08.336 ******* 2026-02-02 02:55:21.169556 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-02 02:55:21.169564 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-02 02:55:21.169572 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:55:21.169580 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-02 02:55:21.169588 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-02 02:55:21.169597 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:55:21.169606 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-02 02:55:21.169612 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-02 02:55:21.169619 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:55:21.169627 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-02 02:55:21.169653 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-02 02:55:21.169662 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:55:21.169671 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-02 02:55:21.169679 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-02 02:55:21.169687 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:55:21.169695 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-02 02:55:21.169703 | 
orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-02 02:55:21.169711 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:55:21.169719 | orchestrator | 2026-02-02 02:55:21.169727 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-02-02 02:55:21.169735 | orchestrator | Monday 02 February 2026 02:55:09 +0000 (0:00:00.718) 0:00:09.055 ******* 2026-02-02 02:55:21.169743 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:55:21.169751 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:55:21.169782 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:55:21.169800 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:55:21.169808 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:55:21.169816 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:55:21.169824 | orchestrator | 2026-02-02 02:55:21.169832 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-02-02 02:55:21.169842 | orchestrator | Monday 02 February 2026 02:55:11 +0000 (0:00:01.183) 0:00:10.238 ******* 2026-02-02 02:55:21.169850 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:55:21.169859 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:55:21.169867 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:55:21.169874 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:55:21.169882 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:55:21.169890 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:55:21.169898 | orchestrator | 2026-02-02 02:55:21.169906 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-02-02 02:55:21.169914 | orchestrator | Monday 02 February 2026 02:55:11 +0000 (0:00:00.795) 0:00:11.034 ******* 2026-02-02 02:55:21.169922 | orchestrator | changed: [testbed-node-1] 2026-02-02 02:55:21.169929 | orchestrator | changed: 
[testbed-node-5] 2026-02-02 02:55:21.169938 | orchestrator | changed: [testbed-node-3] 2026-02-02 02:55:21.169946 | orchestrator | changed: [testbed-node-4] 2026-02-02 02:55:21.169954 | orchestrator | changed: [testbed-node-2] 2026-02-02 02:55:21.169962 | orchestrator | changed: [testbed-node-0] 2026-02-02 02:55:21.169970 | orchestrator | 2026-02-02 02:55:21.169978 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-02-02 02:55:21.169986 | orchestrator | Monday 02 February 2026 02:55:17 +0000 (0:00:05.277) 0:00:16.311 ******* 2026-02-02 02:55:21.169994 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:55:21.170054 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:55:21.170066 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:55:21.170074 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:55:21.170082 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:55:21.170090 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:55:21.170123 | orchestrator | 2026-02-02 02:55:21.170132 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-02-02 02:55:21.170140 | orchestrator | Monday 02 February 2026 02:55:18 +0000 (0:00:00.936) 0:00:17.248 ******* 2026-02-02 02:55:21.170148 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:55:21.170156 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:55:21.170164 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:55:21.170172 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:55:21.170180 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:55:21.170188 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:55:21.170196 | orchestrator | 2026-02-02 02:55:21.170205 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-02-02 02:55:21.170214 | orchestrator | Monday 02 
February 2026 02:55:19 +0000 (0:00:01.395) 0:00:18.643 *******
2026-02-02 02:55:21.170222 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:55:21.170231 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:55:21.170238 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:55:21.170246 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:55:21.170254 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:55:21.170262 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:55:21.170270 | orchestrator |
2026-02-02 02:55:21.170279 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-02-02 02:55:21.170287 | orchestrator | Monday 02 February 2026 02:55:20 +0000 (0:00:00.640) 0:00:19.284 *******
2026-02-02 02:55:21.170296 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-02-02 02:55:21.170309 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-02-02 02:55:21.170318 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-02-02 02:55:21.170326 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-02-02 02:55:21.170342 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:55:21.170350 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-02-02 02:55:21.170358 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-02-02 02:55:21.170366 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:55:21.170375 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:55:21.170383 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-02-02 02:55:21.170391 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-02-02 02:55:21.170398 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:55:21.170406 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-02-02 02:55:21.170414 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-02-02 02:55:21.170422 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:55:21.170430 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-02-02 02:55:21.170437 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-02-02 02:55:21.170445 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:55:21.170453 | orchestrator |
2026-02-02 02:55:21.170461 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-02-02 02:55:21.170478 | orchestrator | Monday 02 February 2026 02:55:21 +0000 (0:00:00.933) 0:00:20.217 *******
2026-02-02 02:56:35.502430 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:56:35.502583 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:56:35.502606 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:56:35.502624 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:56:35.502678 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:56:35.502690 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:56:35.502700 | orchestrator |
2026-02-02 02:56:35.502711 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-02-02 02:56:35.502723 | orchestrator | Monday 02 February 2026 02:55:21 +0000 (0:00:00.609) 0:00:20.827 *******
2026-02-02 02:56:35.502769 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:56:35.502786 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:56:35.502800 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:56:35.502815 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:56:35.502830 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:56:35.502844 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:56:35.502859 | orchestrator |
2026-02-02 02:56:35.502873 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-02-02 02:56:35.502888 | orchestrator |
2026-02-02 02:56:35.502902 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-02-02 02:56:35.502918 | orchestrator | Monday 02 February 2026 02:55:22 +0000 (0:00:01.185) 0:00:22.013 *******
2026-02-02 02:56:35.502934 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:56:35.502951 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:56:35.502966 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:56:35.502980 | orchestrator |
2026-02-02 02:56:35.502991 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-02-02 02:56:35.503002 | orchestrator | Monday 02 February 2026 02:55:24 +0000 (0:00:01.185) 0:00:23.198 *******
2026-02-02 02:56:35.503013 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:56:35.503024 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:56:35.503034 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:56:35.503044 | orchestrator |
2026-02-02 02:56:35.503055 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-02-02 02:56:35.503065 | orchestrator | Monday 02 February 2026 02:55:25 +0000 (0:00:01.561) 0:00:24.760 *******
2026-02-02 02:56:35.503077 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:56:35.503087 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:56:35.503097 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:56:35.503108 | orchestrator |
2026-02-02 02:56:35.503118 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-02-02 02:56:35.503154 | orchestrator | Monday 02 February 2026 02:55:26 +0000 (0:00:00.852) 0:00:25.612 *******
2026-02-02 02:56:35.503165 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:56:35.503175 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:56:35.503185 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:56:35.503200 | orchestrator |
2026-02-02 02:56:35.503215 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-02-02 02:56:35.503229 | orchestrator | Monday 02 February 2026 02:55:27 +0000 (0:00:00.609) 0:00:26.221 *******
2026-02-02 02:56:35.503243 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:56:35.503258 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:56:35.503272 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:56:35.503288 | orchestrator |
2026-02-02 02:56:35.503303 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-02-02 02:56:35.503338 | orchestrator | Monday 02 February 2026 02:55:27 +0000 (0:00:00.334) 0:00:26.556 *******
2026-02-02 02:56:35.503349 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:56:35.503357 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:56:35.503366 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:56:35.503375 | orchestrator |
2026-02-02 02:56:35.503384 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-02-02 02:56:35.503393 | orchestrator | Monday 02 February 2026 02:55:28 +0000 (0:00:00.981) 0:00:27.537 *******
2026-02-02 02:56:35.503402 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:56:35.503411 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:56:35.503420 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:56:35.503429 | orchestrator |
2026-02-02 02:56:35.503438 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-02-02 02:56:35.503447 | orchestrator | Monday 02 February 2026 02:55:29 +0000 (0:00:01.513) 0:00:29.051 *******
2026-02-02 02:56:35.503456 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 02:56:35.503465 | orchestrator |
2026-02-02 02:56:35.503473 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-02-02 02:56:35.503482 | orchestrator | Monday 02 February 2026 02:55:30 +0000 (0:00:00.517) 0:00:29.569 *******
2026-02-02 02:56:35.503491 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:56:35.503500 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:56:35.503509 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:56:35.503518 | orchestrator |
2026-02-02 02:56:35.503526 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-02-02 02:56:35.503535 | orchestrator | Monday 02 February 2026 02:55:32 +0000 (0:00:01.974) 0:00:31.544 *******
2026-02-02 02:56:35.503544 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:56:35.503553 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:56:35.503568 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:56:35.503582 | orchestrator |
2026-02-02 02:56:35.503597 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-02-02 02:56:35.503612 | orchestrator | Monday 02 February 2026 02:55:33 +0000 (0:00:00.541) 0:00:32.085 *******
2026-02-02 02:56:35.503626 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:56:35.503641 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:56:35.503652 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:56:35.503661 | orchestrator |
2026-02-02 02:56:35.503669 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-02-02 02:56:35.503678 | orchestrator | Monday 02 February 2026 02:55:34 +0000 (0:00:01.302) 0:00:33.388 *******
2026-02-02 02:56:35.503687 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:56:35.503695 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:56:35.503704 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:56:35.503713 | orchestrator |
2026-02-02 02:56:35.503722 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-02-02 02:56:35.503777 | orchestrator | Monday 02 February 2026 02:55:35 +0000 (0:00:01.169) 0:00:34.557 *******
2026-02-02 02:56:35.503788 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:56:35.503806 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:56:35.503815 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:56:35.503824 | orchestrator |
2026-02-02 02:56:35.503833 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-02-02 02:56:35.503841 | orchestrator | Monday 02 February 2026 02:55:35 +0000 (0:00:00.289) 0:00:34.846 *******
2026-02-02 02:56:35.503850 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:56:35.503859 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:56:35.503868 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:56:35.503876 | orchestrator |
2026-02-02 02:56:35.503885 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-02-02 02:56:35.503894 | orchestrator | Monday 02 February 2026 02:55:36 +0000 (0:00:00.564) 0:00:35.410 *******
2026-02-02 02:56:35.503902 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:56:35.503915 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:56:35.503930 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:56:35.503944 | orchestrator |
2026-02-02 02:56:35.503967 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-02-02 02:56:35.503982 | orchestrator | Monday 02 February 2026 02:55:37 +0000 (0:00:01.172) 0:00:36.583 *******
2026-02-02 02:56:35.503998 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:56:35.504012 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:56:35.504028 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:56:35.504041 | orchestrator |
2026-02-02 02:56:35.504056 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-02-02 02:56:35.504072 | orchestrator | Monday 02 February 2026 02:55:40 +0000 (0:00:02.613) 0:00:39.196 *******
2026-02-02 02:56:35.504086 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:56:35.504101 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:56:35.504110 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:56:35.504123 | orchestrator |
2026-02-02 02:56:35.504132 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-02-02 02:56:35.504141 | orchestrator | Monday 02 February 2026 02:55:40 +0000 (0:00:00.418) 0:00:39.615 *******
2026-02-02 02:56:35.504150 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-02 02:56:35.504161 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-02 02:56:35.504169 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-02 02:56:35.504178 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-02 02:56:35.504187 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-02 02:56:35.504195 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-02 02:56:35.504204 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-02 02:56:35.504213 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-02 02:56:35.504222 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-02 02:56:35.504230 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-02 02:56:35.504239 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-02 02:56:35.504256 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-02 02:56:35.504265 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-02-02 02:56:35.504281 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-02-02 02:56:35.504295 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-02-02 02:56:35.504309 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:56:35.504324 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:56:35.504338 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:56:35.504354 | orchestrator |
2026-02-02 02:56:35.504376 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-02-02 02:56:35.504391 | orchestrator | Monday 02 February 2026 02:56:34 +0000 (0:00:53.639) 0:01:33.255 *******
2026-02-02 02:56:35.504401 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:56:35.504410 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:56:35.504419 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:56:35.504427 | orchestrator |
2026-02-02 02:56:35.504436 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-02-02 02:56:35.504445 | orchestrator | Monday 02 February 2026 02:56:34 +0000 (0:00:00.294) 0:01:33.550 *******
2026-02-02 02:56:35.504462 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:57:17.589691 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:57:17.589808 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:57:17.589834 | orchestrator |
2026-02-02 02:57:17.589854 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-02-02 02:57:17.589874 | orchestrator | Monday 02 February 2026 02:56:35 +0000 (0:00:01.006) 0:01:34.556 *******
2026-02-02 02:57:17.589888 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:57:17.589900 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:57:17.589910 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:57:17.589921 | orchestrator |
2026-02-02 02:57:17.589932 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-02-02 02:57:17.589944 | orchestrator | Monday 02 February 2026 02:56:36 +0000 (0:00:01.229) 0:01:35.786 *******
2026-02-02 02:57:17.589955 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:57:17.589966 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:57:17.589976 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:57:17.589990 | orchestrator |
2026-02-02 02:57:17.590008 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-02-02 02:57:17.590114 | orchestrator | Monday 02 February 2026 02:57:03 +0000 (0:00:26.723) 0:02:02.510 *******
2026-02-02 02:57:17.590126 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:57:17.590138 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:57:17.590149 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:57:17.590160 | orchestrator |
2026-02-02 02:57:17.590171 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-02-02 02:57:17.590184 | orchestrator | Monday 02 February 2026 02:57:04 +0000 (0:00:00.615) 0:02:03.125 *******
2026-02-02 02:57:17.590204 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:57:17.590223 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:57:17.590236 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:57:17.590247 | orchestrator |
2026-02-02 02:57:17.590258 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-02-02 02:57:17.590269 | orchestrator | Monday 02 February 2026 02:57:04 +0000 (0:00:00.605) 0:02:03.731 *******
2026-02-02 02:57:17.590280 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:57:17.590292 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:57:17.590302 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:57:17.590313 | orchestrator |
2026-02-02 02:57:17.590324 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-02-02 02:57:17.590444 | orchestrator | Monday 02 February 2026 02:57:05 +0000 (0:00:00.609) 0:02:04.341 *******
2026-02-02 02:57:17.590469 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:57:17.590488 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:57:17.590506 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:57:17.590525 | orchestrator |
2026-02-02 02:57:17.590543 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-02-02 02:57:17.590603 | orchestrator | Monday 02 February 2026 02:57:06 +0000 (0:00:00.789) 0:02:05.131 *******
2026-02-02 02:57:17.590617 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:57:17.590628 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:57:17.590639 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:57:17.590650 | orchestrator |
2026-02-02 02:57:17.590661 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-02-02 02:57:17.590672 | orchestrator | Monday 02 February 2026 02:57:06 +0000 (0:00:00.305) 0:02:05.436 *******
2026-02-02 02:57:17.590683 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:57:17.590694 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:57:17.590704 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:57:17.590715 | orchestrator |
2026-02-02 02:57:17.590726 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-02-02 02:57:17.590738 | orchestrator | Monday 02 February 2026 02:57:06 +0000 (0:00:00.628) 0:02:06.064 *******
2026-02-02 02:57:17.590749 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:57:17.590760 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:57:17.590771 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:57:17.590782 | orchestrator |
2026-02-02 02:57:17.590793 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-02-02 02:57:17.590804 | orchestrator | Monday 02 February 2026 02:57:07 +0000 (0:00:00.599) 0:02:06.664 *******
2026-02-02 02:57:17.590815 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:57:17.590826 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:57:17.590837 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:57:17.590848 | orchestrator |
2026-02-02 02:57:17.590859 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-02-02 02:57:17.590871 | orchestrator | Monday 02 February 2026 02:57:08 +0000 (0:00:00.882) 0:02:07.547 *******
2026-02-02 02:57:17.590885 | orchestrator | changed: [testbed-node-0]
2026-02-02 02:57:17.590896 | orchestrator | changed: [testbed-node-1]
2026-02-02 02:57:17.590907 | orchestrator | changed: [testbed-node-2]
2026-02-02 02:57:17.590918 | orchestrator |
2026-02-02 02:57:17.590929 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-02-02 02:57:17.590940 | orchestrator | Monday 02 February 2026 02:57:09 +0000 (0:00:01.032) 0:02:08.580 *******
2026-02-02 02:57:17.590951 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:57:17.590962 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:57:17.590973 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:57:17.590984 | orchestrator |
2026-02-02 02:57:17.590995 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-02-02 02:57:17.591006 | orchestrator | Monday 02 February 2026 02:57:09 +0000 (0:00:00.324) 0:02:08.905 *******
2026-02-02 02:57:17.591017 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:57:17.591028 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:57:17.591038 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:57:17.591049 | orchestrator |
2026-02-02 02:57:17.591060 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-02-02 02:57:17.591071 | orchestrator | Monday 02 February 2026 02:57:10 +0000 (0:00:00.279) 0:02:09.184 *******
2026-02-02 02:57:17.591082 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:57:17.591093 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:57:17.591104 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:57:17.591115 | orchestrator |
2026-02-02 02:57:17.591125 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-02-02 02:57:17.591136 | orchestrator | Monday 02 February 2026 02:57:10 +0000 (0:00:00.611) 0:02:09.795 *******
2026-02-02 02:57:17.591157 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:57:17.591187 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:57:17.591229 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:57:17.591241 | orchestrator |
2026-02-02 02:57:17.591253 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-02-02 02:57:17.591265 | orchestrator | Monday 02 February 2026 02:57:11 +0000 (0:00:00.883) 0:02:10.679 *******
2026-02-02 02:57:17.591276 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-02 02:57:17.591288 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-02 02:57:17.591305 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-02 02:57:17.591325 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-02 02:57:17.591340 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-02 02:57:17.591351 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-02 02:57:17.591365 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-02 02:57:17.591386 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-02 02:57:17.591405 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-02 02:57:17.591425 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-02-02 02:57:17.591438 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-02 02:57:17.591449 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-02 02:57:17.591460 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-02-02 02:57:17.591471 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-02 02:57:17.591482 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-02 02:57:17.591492 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-02 02:57:17.591503 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-02 02:57:17.591514 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-02 02:57:17.591525 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-02 02:57:17.591535 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-02 02:57:17.591546 | orchestrator |
2026-02-02 02:57:17.591726 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-02-02 02:57:17.591762 | orchestrator |
2026-02-02 02:57:17.591774 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-02-02 02:57:17.591785 | orchestrator | Monday 02 February 2026 02:57:14 +0000 (0:00:02.950) 0:02:13.629 *******
2026-02-02 02:57:17.591795 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:57:17.591806 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:57:17.591817 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:57:17.591828 | orchestrator |
2026-02-02 02:57:17.591852 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-02-02 02:57:17.591882 | orchestrator | Monday 02 February 2026 02:57:14 +0000 (0:00:00.311) 0:02:13.941 *******
2026-02-02 02:57:17.591893 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:57:17.591915 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:57:17.591925 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:57:17.591947 | orchestrator |
2026-02-02 02:57:17.591958 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-02-02 02:57:17.591969 | orchestrator | Monday 02 February 2026 02:57:15 +0000 (0:00:00.786) 0:02:14.728 *******
2026-02-02 02:57:17.591980 | orchestrator | ok: [testbed-node-3]
2026-02-02 02:57:17.591991 | orchestrator | ok: [testbed-node-4]
2026-02-02 02:57:17.592001 | orchestrator | ok: [testbed-node-5]
2026-02-02 02:57:17.592012 | orchestrator |
2026-02-02 02:57:17.592023 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-02-02 02:57:17.592033 | orchestrator | Monday 02 February 2026 02:57:16 +0000 (0:00:00.355) 0:02:15.083 *******
2026-02-02 02:57:17.592044 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 02:57:17.592055 | orchestrator |
2026-02-02 02:57:17.592066 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-02-02 02:57:17.592077 | orchestrator | Monday 02 February 2026 02:57:16 +0000 (0:00:00.523) 0:02:15.606 *******
2026-02-02 02:57:17.592087 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:57:17.592098 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:57:17.592109 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:57:17.592120 | orchestrator |
2026-02-02 02:57:17.592131 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-02-02 02:57:17.592142 | orchestrator | Monday 02 February 2026 02:57:17 +0000 (0:00:00.510) 0:02:16.116 *******
2026-02-02 02:57:17.592152 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:57:17.592163 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:57:17.592173 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:57:17.592184 | orchestrator |
2026-02-02 02:57:17.592195 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-02-02 02:57:17.592205 | orchestrator | Monday 02 February 2026 02:57:17 +0000 (0:00:00.338) 0:02:16.455 *******
2026-02-02 02:57:17.592230 | orchestrator | skipping: [testbed-node-3]
2026-02-02 02:59:00.422093 | orchestrator | skipping: [testbed-node-4]
2026-02-02 02:59:00.422174 | orchestrator | skipping: [testbed-node-5]
2026-02-02 02:59:00.422195 | orchestrator |
2026-02-02 02:59:00.422201 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-02-02 02:59:00.422206 | orchestrator | Monday 02 February 2026 02:57:17 +0000 (0:00:00.330) 0:02:16.786 *******
2026-02-02 02:59:00.422211 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:59:00.422215 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:59:00.422219 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:59:00.422223 | orchestrator |
2026-02-02 02:59:00.422227 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-02-02 02:59:00.422231 | orchestrator | Monday 02 February 2026 02:57:18 +0000 (0:00:00.601) 0:02:17.388 *******
2026-02-02 02:59:00.422235 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:59:00.422239 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:59:00.422242 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:59:00.422246 | orchestrator |
2026-02-02 02:59:00.422250 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-02-02 02:59:00.422254 | orchestrator | Monday 02 February 2026 02:57:19 +0000 (0:00:01.279) 0:02:18.667 *******
2026-02-02 02:59:00.422258 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:59:00.422261 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:59:00.422265 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:59:00.422269 | orchestrator |
2026-02-02 02:59:00.422273 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-02-02 02:59:00.422276 | orchestrator | Monday 02 February 2026 02:57:20 +0000 (0:00:01.169) 0:02:19.837 *******
2026-02-02 02:59:00.422280 | orchestrator | changed: [testbed-node-5]
2026-02-02 02:59:00.422284 | orchestrator | changed: [testbed-node-4]
2026-02-02 02:59:00.422288 | orchestrator | changed: [testbed-node-3]
2026-02-02 02:59:00.422291 | orchestrator |
2026-02-02 02:59:00.422295 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-02-02 02:59:00.422313 | orchestrator |
2026-02-02 02:59:00.422317 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-02-02 02:59:00.422321 | orchestrator | Monday 02 February 2026 02:57:30 +0000 (0:00:09.484) 0:02:29.321 *******
2026-02-02 02:59:00.422325 | orchestrator | ok: [testbed-manager]
2026-02-02 02:59:00.422330 | orchestrator |
2026-02-02 02:59:00.422333 | orchestrator | TASK [Create .kube directory] **************************************************
2026-02-02 02:59:00.422337 | orchestrator | Monday 02 February 2026 02:57:31 +0000 (0:00:00.783) 0:02:30.105 *******
2026-02-02 02:59:00.422341 | orchestrator | changed: [testbed-manager]
2026-02-02 02:59:00.422345 | orchestrator |
2026-02-02 02:59:00.422349 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-02 02:59:00.422352 | orchestrator | Monday 02 February 2026 02:57:31 +0000 (0:00:00.661) 0:02:30.767 *******
2026-02-02 02:59:00.422356 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-02 02:59:00.422360 | orchestrator |
2026-02-02 02:59:00.422364 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-02 02:59:00.422367 | orchestrator | Monday 02 February 2026 02:57:32 +0000 (0:00:00.529) 0:02:31.296 *******
2026-02-02 02:59:00.422371 | orchestrator | changed: [testbed-manager]
2026-02-02 02:59:00.422375 | orchestrator |
2026-02-02 02:59:00.422379 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-02-02 02:59:00.422382 | orchestrator | Monday 02 February 2026 02:57:33 +0000 (0:00:00.916) 0:02:32.212 *******
2026-02-02 02:59:00.422386 | orchestrator | changed: [testbed-manager]
2026-02-02 02:59:00.422390 | orchestrator |
2026-02-02 02:59:00.422393 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-02-02 02:59:00.422397 | orchestrator | Monday 02 February 2026 02:57:33 +0000 (0:00:00.570) 0:02:32.782 *******
2026-02-02 02:59:00.422401 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-02 02:59:00.422405 | orchestrator |
2026-02-02 02:59:00.422409 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-02-02 02:59:00.422413 | orchestrator | Monday 02 February 2026 02:57:35 +0000 (0:00:01.651) 0:02:34.434 *******
2026-02-02 02:59:00.422416 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-02 02:59:00.422420 | orchestrator |
2026-02-02 02:59:00.422437 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-02-02 02:59:00.422441 | orchestrator | Monday 02 February 2026 02:57:36 +0000 (0:00:00.844) 0:02:35.278 *******
2026-02-02 02:59:00.422444 | orchestrator | changed: [testbed-manager]
2026-02-02 02:59:00.422448 | orchestrator |
2026-02-02 02:59:00.422452 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-02-02 02:59:00.422456 | orchestrator | Monday 02 February 2026 02:57:36 +0000 (0:00:00.447) 0:02:35.726 *******
2026-02-02 02:59:00.422459 | orchestrator | changed: [testbed-manager]
2026-02-02 02:59:00.422463 | orchestrator |
2026-02-02 02:59:00.422467 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-02-02 02:59:00.422470 | orchestrator |
2026-02-02 02:59:00.422474 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-02-02 02:59:00.422478 | orchestrator | Monday 02 February 2026 02:57:37 +0000 (0:00:00.455) 0:02:36.182 *******
2026-02-02 02:59:00.422482 | orchestrator | ok: [testbed-manager]
2026-02-02 02:59:00.422486 | orchestrator |
2026-02-02 02:59:00.422490 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-02-02 02:59:00.422494 | orchestrator | Monday 02 February 2026 02:57:37 +0000 (0:00:00.172) 0:02:36.354 *******
2026-02-02 02:59:00.422497 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-02-02 02:59:00.422502 | orchestrator |
2026-02-02 02:59:00.422506 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-02-02 02:59:00.422509 | orchestrator | Monday 02 February 2026 02:57:37 +0000 (0:00:00.461) 0:02:36.815 *******
2026-02-02 02:59:00.422513 | orchestrator | ok: [testbed-manager]
2026-02-02 02:59:00.422517 | orchestrator |
2026-02-02 02:59:00.422524 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-02-02 02:59:00.422528 | orchestrator | Monday 02 February 2026 02:57:38 +0000 (0:00:00.800) 0:02:37.616 *******
2026-02-02 02:59:00.422532 | orchestrator | ok: [testbed-manager]
2026-02-02 02:59:00.422535 | orchestrator |
2026-02-02 02:59:00.422548 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-02-02 02:59:00.422552 | orchestrator | Monday 02 February 2026 02:57:40 +0000 (0:00:01.782) 0:02:39.398 *******
2026-02-02 02:59:00.422556 | orchestrator | changed: [testbed-manager]
2026-02-02 02:59:00.422560 | orchestrator |
2026-02-02 02:59:00.422564 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-02-02 02:59:00.422567 | orchestrator | Monday 02 February 2026 02:57:41 +0000 (0:00:00.789) 0:02:40.188 *******
2026-02-02 02:59:00.422571 | orchestrator | ok: [testbed-manager]
2026-02-02 02:59:00.422575 | orchestrator |
2026-02-02 02:59:00.422578 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-02-02 02:59:00.422582 | orchestrator | Monday 02 February 2026 02:57:41 +0000 (0:00:00.464) 0:02:40.653 *******
2026-02-02 02:59:00.422586 | orchestrator | changed: [testbed-manager]
2026-02-02 02:59:00.422590 | orchestrator |
2026-02-02 02:59:00.422593 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-02-02 02:59:00.422597 | orchestrator | Monday 02 February 2026 02:57:50 +0000 (0:00:08.739) 0:02:49.393 *******
2026-02-02 02:59:00.422601 | orchestrator | changed: [testbed-manager]
2026-02-02 02:59:00.422604 | orchestrator |
2026-02-02 02:59:00.422608 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-02-02 02:59:00.422612 | orchestrator | Monday 02 February 2026 02:58:03 +0000 (0:00:12.797) 0:03:02.190 *******
2026-02-02 02:59:00.422616 | orchestrator | ok: [testbed-manager]
2026-02-02 02:59:00.422619 | orchestrator |
2026-02-02 02:59:00.422623 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-02-02 02:59:00.422627 | orchestrator |
2026-02-02 02:59:00.422631 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-02-02 02:59:00.422634 | orchestrator | Monday 02 February 2026 02:58:04 +0000 (0:00:00.967) 0:03:03.157 *******
2026-02-02 02:59:00.422638 | orchestrator | ok: [testbed-node-0]
2026-02-02 02:59:00.422642 | orchestrator | ok: [testbed-node-1]
2026-02-02 02:59:00.422646 | orchestrator | ok: [testbed-node-2]
2026-02-02 02:59:00.422649 | orchestrator |
2026-02-02 02:59:00.422653 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-02-02 02:59:00.422657 | orchestrator | Monday 02 February 2026 02:58:04 +0000 (0:00:00.342) 0:03:03.500 *******
2026-02-02 02:59:00.422661 | orchestrator | skipping: [testbed-node-0]
2026-02-02 02:59:00.422664 | orchestrator | skipping: [testbed-node-1]
2026-02-02 02:59:00.422668 | orchestrator | skipping: [testbed-node-2]
2026-02-02 02:59:00.422672 | orchestrator |
2026-02-02 02:59:00.422675 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-02-02 02:59:00.422679 | orchestrator | Monday 02 February 2026 02:58:04 +0000 (0:00:00.337) 0:03:03.837 *******
2026-02-02 02:59:00.422683 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 02:59:00.422687 | orchestrator |
2026-02-02 02:59:00.422691 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-02-02 02:59:00.422694 | orchestrator | Monday 02 February 2026 02:58:05 +0000 (0:00:00.583) 0:03:04.421 *******
2026-02-02 02:59:00.422698 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-02 02:59:00.422702 |
orchestrator | 2026-02-02 02:59:00.422706 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-02-02 02:59:00.422709 | orchestrator | Monday 02 February 2026 02:58:06 +0000 (0:00:01.062) 0:03:05.483 ******* 2026-02-02 02:59:00.422713 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-02 02:59:00.422717 | orchestrator | 2026-02-02 02:59:00.422721 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-02-02 02:59:00.422772 | orchestrator | Monday 02 February 2026 02:58:07 +0000 (0:00:00.840) 0:03:06.324 ******* 2026-02-02 02:59:00.422776 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:59:00.422780 | orchestrator | 2026-02-02 02:59:00.422783 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-02-02 02:59:00.422787 | orchestrator | Monday 02 February 2026 02:58:07 +0000 (0:00:00.135) 0:03:06.460 ******* 2026-02-02 02:59:00.422791 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-02 02:59:00.422795 | orchestrator | 2026-02-02 02:59:00.422798 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-02-02 02:59:00.422802 | orchestrator | Monday 02 February 2026 02:58:08 +0000 (0:00:01.030) 0:03:07.491 ******* 2026-02-02 02:59:00.422806 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:59:00.422810 | orchestrator | 2026-02-02 02:59:00.422813 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-02-02 02:59:00.422817 | orchestrator | Monday 02 February 2026 02:58:08 +0000 (0:00:00.110) 0:03:07.602 ******* 2026-02-02 02:59:00.422821 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:59:00.422824 | orchestrator | 2026-02-02 02:59:00.422828 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-02-02 02:59:00.422832 | orchestrator | Monday 02 
February 2026 02:58:08 +0000 (0:00:00.139) 0:03:07.741 ******* 2026-02-02 02:59:00.422836 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:59:00.422839 | orchestrator | 2026-02-02 02:59:00.422843 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-02-02 02:59:00.422850 | orchestrator | Monday 02 February 2026 02:58:08 +0000 (0:00:00.116) 0:03:07.858 ******* 2026-02-02 02:59:00.422853 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:59:00.422857 | orchestrator | 2026-02-02 02:59:00.422861 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-02-02 02:59:00.422865 | orchestrator | Monday 02 February 2026 02:58:08 +0000 (0:00:00.112) 0:03:07.970 ******* 2026-02-02 02:59:00.422869 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-02 02:59:00.422872 | orchestrator | 2026-02-02 02:59:00.422876 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-02-02 02:59:00.422880 | orchestrator | Monday 02 February 2026 02:58:14 +0000 (0:00:05.246) 0:03:13.217 ******* 2026-02-02 02:59:00.422884 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-02-02 02:59:00.422888 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-02-02 02:59:00.422895 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-02-02 02:59:24.441563 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-02-02 02:59:24.441689 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-02-02 02:59:24.441711 | orchestrator | 2026-02-02 02:59:24.441730 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-02-02 02:59:24.441748 | orchestrator | Monday 02 February 2026 02:59:00 +0000 (0:00:46.261) 0:03:59.479 ******* 2026-02-02 02:59:24.441764 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-02 02:59:24.441780 | orchestrator | 2026-02-02 02:59:24.441797 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-02-02 02:59:24.441813 | orchestrator | Monday 02 February 2026 02:59:01 +0000 (0:00:01.258) 0:04:00.737 ******* 2026-02-02 02:59:24.441830 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-02 02:59:24.441845 | orchestrator | 2026-02-02 02:59:24.441862 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-02-02 02:59:24.441879 | orchestrator | Monday 02 February 2026 02:59:03 +0000 (0:00:01.609) 0:04:02.347 ******* 2026-02-02 02:59:24.441895 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-02 02:59:24.441942 | orchestrator | 2026-02-02 02:59:24.441958 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-02-02 02:59:24.441975 | orchestrator | Monday 02 February 2026 02:59:04 +0000 (0:00:01.339) 0:04:03.686 ******* 2026-02-02 02:59:24.442095 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:59:24.442165 | orchestrator | 2026-02-02 02:59:24.442186 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-02-02 02:59:24.442205 | orchestrator 
| Monday 02 February 2026 02:59:04 +0000 (0:00:00.136) 0:04:03.823 ******* 2026-02-02 02:59:24.442225 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-02-02 02:59:24.442246 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-02-02 02:59:24.442263 | orchestrator | 2026-02-02 02:59:24.442283 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-02-02 02:59:24.442304 | orchestrator | Monday 02 February 2026 02:59:06 +0000 (0:00:01.917) 0:04:05.740 ******* 2026-02-02 02:59:24.442324 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:59:24.442345 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:59:24.442365 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:59:24.442384 | orchestrator | 2026-02-02 02:59:24.442404 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-02-02 02:59:24.442425 | orchestrator | Monday 02 February 2026 02:59:06 +0000 (0:00:00.325) 0:04:06.065 ******* 2026-02-02 02:59:24.442445 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:59:24.442463 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:59:24.442479 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:59:24.442574 | orchestrator | 2026-02-02 02:59:24.442617 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-02-02 02:59:24.442633 | orchestrator | 2026-02-02 02:59:24.442649 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-02-02 02:59:24.442664 | orchestrator | Monday 02 February 2026 02:59:07 +0000 (0:00:00.872) 0:04:06.938 ******* 2026-02-02 02:59:24.442679 | orchestrator | ok: [testbed-manager] 2026-02-02 02:59:24.442696 | orchestrator | 2026-02-02 02:59:24.442713 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-02-02 02:59:24.442730 | orchestrator | Monday 02 February 2026 02:59:08 +0000 (0:00:00.363) 0:04:07.301 ******* 2026-02-02 02:59:24.442746 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-02-02 02:59:24.442763 | orchestrator | 2026-02-02 02:59:24.442780 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-02-02 02:59:24.442798 | orchestrator | Monday 02 February 2026 02:59:08 +0000 (0:00:00.265) 0:04:07.567 ******* 2026-02-02 02:59:24.442844 | orchestrator | changed: [testbed-manager] 2026-02-02 02:59:24.442861 | orchestrator | 2026-02-02 02:59:24.442876 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-02-02 02:59:24.442891 | orchestrator | 2026-02-02 02:59:24.442907 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-02-02 02:59:24.442924 | orchestrator | Monday 02 February 2026 02:59:13 +0000 (0:00:05.491) 0:04:13.059 ******* 2026-02-02 02:59:24.442940 | orchestrator | ok: [testbed-node-3] 2026-02-02 02:59:24.442956 | orchestrator | ok: [testbed-node-4] 2026-02-02 02:59:24.442972 | orchestrator | ok: [testbed-node-5] 2026-02-02 02:59:24.442988 | orchestrator | ok: [testbed-node-0] 2026-02-02 02:59:24.443004 | orchestrator | ok: [testbed-node-1] 2026-02-02 02:59:24.443021 | orchestrator | ok: [testbed-node-2] 2026-02-02 02:59:24.443038 | orchestrator | 2026-02-02 02:59:24.443054 | orchestrator | TASK [Manage labels] *********************************************************** 2026-02-02 02:59:24.443069 | orchestrator | Monday 02 February 2026 02:59:14 +0000 (0:00:00.657) 0:04:13.717 ******* 2026-02-02 02:59:24.443079 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-02 02:59:24.443089 | orchestrator | ok: [testbed-node-4 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2026-02-02 02:59:24.443126 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-02 02:59:24.443141 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-02 02:59:24.443173 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-02 02:59:24.443188 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-02 02:59:24.443202 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-02 02:59:24.443225 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-02 02:59:24.443243 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-02 02:59:24.443287 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-02 02:59:24.443342 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-02 02:59:24.443361 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-02 02:59:24.443379 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-02 02:59:24.443395 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-02 02:59:24.443409 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-02 02:59:24.443459 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-02 02:59:24.443479 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-02 02:59:24.443495 | orchestrator | ok: [testbed-node-0 -> localhost] 
=> (item=node-role.osism.tech/network-plane=true) 2026-02-02 02:59:24.443511 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-02 02:59:24.443528 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-02 02:59:24.443543 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-02 02:59:24.443558 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-02 02:59:24.443575 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-02 02:59:24.443592 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-02 02:59:24.443609 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-02 02:59:24.443625 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-02 02:59:24.443641 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-02 02:59:24.443657 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-02 02:59:24.443672 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-02 02:59:24.443688 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-02 02:59:24.443703 | orchestrator | 2026-02-02 02:59:24.443720 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-02-02 02:59:24.443738 | orchestrator | Monday 02 February 2026 02:59:23 +0000 (0:00:08.484) 0:04:22.202 ******* 2026-02-02 02:59:24.443754 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:59:24.443770 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:59:24.443788 | orchestrator | 
skipping: [testbed-node-5] 2026-02-02 02:59:24.443805 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:59:24.443821 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:59:24.443837 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:59:24.443853 | orchestrator | 2026-02-02 02:59:24.443868 | orchestrator | TASK [Manage taints] *********************************************************** 2026-02-02 02:59:24.443885 | orchestrator | Monday 02 February 2026 02:59:23 +0000 (0:00:00.558) 0:04:22.761 ******* 2026-02-02 02:59:24.443902 | orchestrator | skipping: [testbed-node-3] 2026-02-02 02:59:24.443934 | orchestrator | skipping: [testbed-node-4] 2026-02-02 02:59:24.443952 | orchestrator | skipping: [testbed-node-5] 2026-02-02 02:59:24.443968 | orchestrator | skipping: [testbed-node-0] 2026-02-02 02:59:24.443985 | orchestrator | skipping: [testbed-node-1] 2026-02-02 02:59:24.444001 | orchestrator | skipping: [testbed-node-2] 2026-02-02 02:59:24.444019 | orchestrator | 2026-02-02 02:59:24.444035 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 02:59:24.444052 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 02:59:24.444072 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-02 02:59:24.444089 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-02 02:59:24.444167 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-02 02:59:24.444185 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-02 02:59:24.444201 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-02 02:59:24.444215 | orchestrator | testbed-node-5 : ok=16  
changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-02 02:59:24.444224 | orchestrator | 2026-02-02 02:59:24.444234 | orchestrator | 2026-02-02 02:59:24.444245 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 02:59:24.444254 | orchestrator | Monday 02 February 2026 02:59:24 +0000 (0:00:00.726) 0:04:23.487 ******* 2026-02-02 02:59:24.444277 | orchestrator | =============================================================================== 2026-02-02 02:59:24.916330 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.64s 2026-02-02 02:59:24.916454 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 46.26s 2026-02-02 02:59:24.916470 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.72s 2026-02-02 02:59:24.916481 | orchestrator | kubectl : Install required packages ------------------------------------ 12.80s 2026-02-02 02:59:24.916492 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.48s 2026-02-02 02:59:24.916502 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 8.74s 2026-02-02 02:59:24.916513 | orchestrator | Manage labels ----------------------------------------------------------- 8.48s 2026-02-02 02:59:24.916523 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.49s 2026-02-02 02:59:24.916533 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.28s 2026-02-02 02:59:24.916543 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.25s 2026-02-02 02:59:24.916554 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.95s 2026-02-02 02:59:24.916566 | orchestrator 
| k3s_server : Detect Kubernetes version for label compatibility ---------- 2.61s 2026-02-02 02:59:24.916576 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.97s 2026-02-02 02:59:24.916586 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.92s 2026-02-02 02:59:24.916597 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.89s 2026-02-02 02:59:24.916607 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.78s 2026-02-02 02:59:24.916617 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.65s 2026-02-02 02:59:24.916653 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.61s 2026-02-02 02:59:24.916664 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 1.56s 2026-02-02 02:59:24.916674 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.51s 2026-02-02 02:59:25.282418 | orchestrator | + osism apply copy-kubeconfig 2026-02-02 02:59:37.529146 | orchestrator | 2026-02-02 02:59:37 | INFO  | Task c6c06d44-8f70-403c-916a-f81905716082 (copy-kubeconfig) was prepared for execution. 2026-02-02 02:59:37.529253 | orchestrator | 2026-02-02 02:59:37 | INFO  | It takes a moment until task c6c06d44-8f70-403c-916a-f81905716082 (copy-kubeconfig) has been started and output is visible here. 
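The `osism apply copy-kubeconfig` task announced above fetches the kubeconfig from the first master and, per the "Change server address in the kubeconfig file" task in the play that follows, rewrites its `server:` entry to an externally reachable address. A minimal sketch of such a rewrite, assuming a simple regex substitution and a hypothetical target address (not the actual playbook code):

```python
import re

def rewrite_server(kubeconfig_text: str, new_server: str) -> str:
    # Point every cluster's "server:" entry at the reachable address,
    # analogous to the "Change server address in the kubeconfig file" task.
    return re.sub(r"(?m)^(\s*server:\s*).*$",
                  lambda m: m.group(1) + new_server,
                  kubeconfig_text)

sample = (
    "clusters:\n"
    "- cluster:\n"
    "    server: https://127.0.0.1:6443\n"
    "  name: default\n"
)
print(rewrite_server(sample, "https://192.168.16.10:6443"))
```

The node address `192.168.16.10` is taken from the delegation target shown in the log; the regex approach is an illustration, not the role's implementation.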
2026-02-02 02:59:44.716867 | orchestrator | 2026-02-02 02:59:44.716961 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-02-02 02:59:44.716974 | orchestrator | 2026-02-02 02:59:44.716983 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-02 02:59:44.716992 | orchestrator | Monday 02 February 2026 02:59:41 +0000 (0:00:00.177) 0:00:00.177 ******* 2026-02-02 02:59:44.717000 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-02 02:59:44.717009 | orchestrator | 2026-02-02 02:59:44.717017 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-02 02:59:44.717025 | orchestrator | Monday 02 February 2026 02:59:42 +0000 (0:00:00.715) 0:00:00.893 ******* 2026-02-02 02:59:44.717126 | orchestrator | changed: [testbed-manager] 2026-02-02 02:59:44.717138 | orchestrator | 2026-02-02 02:59:44.717146 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-02-02 02:59:44.717155 | orchestrator | Monday 02 February 2026 02:59:43 +0000 (0:00:01.271) 0:00:02.164 ******* 2026-02-02 02:59:44.717167 | orchestrator | changed: [testbed-manager] 2026-02-02 02:59:44.717176 | orchestrator | 2026-02-02 02:59:44.717188 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 02:59:44.717197 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 02:59:44.717207 | orchestrator | 2026-02-02 02:59:44.717215 | orchestrator | 2026-02-02 02:59:44.717223 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 02:59:44.717231 | orchestrator | Monday 02 February 2026 02:59:44 +0000 (0:00:00.474) 0:00:02.638 ******* 2026-02-02 02:59:44.717239 | orchestrator | 
=============================================================================== 2026-02-02 02:59:44.717247 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.27s 2026-02-02 02:59:44.717255 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.72s 2026-02-02 02:59:44.717263 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.47s 2026-02-02 02:59:45.125214 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh 2026-02-02 02:59:57.308654 | orchestrator | 2026-02-02 02:59:57 | INFO  | Task 7355fab5-8232-4ac4-8c36-9b84c84aaaae (openstackclient) was prepared for execution. 2026-02-02 02:59:57.308776 | orchestrator | 2026-02-02 02:59:57 | INFO  | It takes a moment until task 7355fab5-8232-4ac4-8c36-9b84c84aaaae (openstackclient) has been started and output is visible here. 2026-02-02 03:00:43.807730 | orchestrator | 2026-02-02 03:00:43.807832 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-02-02 03:00:43.807846 | orchestrator | 2026-02-02 03:00:43.807855 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-02-02 03:00:43.807944 | orchestrator | Monday 02 February 2026 03:00:01 +0000 (0:00:00.243) 0:00:00.243 ******* 2026-02-02 03:00:43.807963 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-02-02 03:00:43.807984 | orchestrator | 2026-02-02 03:00:43.808031 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-02-02 03:00:43.808046 | orchestrator | Monday 02 February 2026 03:00:01 +0000 (0:00:00.221) 0:00:00.465 ******* 2026-02-02 03:00:43.808058 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-02-02 
03:00:43.808073 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-02-02 03:00:43.808087 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-02-02 03:00:43.808101 | orchestrator | 2026-02-02 03:00:43.808115 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-02-02 03:00:43.808127 | orchestrator | Monday 02 February 2026 03:00:03 +0000 (0:00:01.328) 0:00:01.793 ******* 2026-02-02 03:00:43.808141 | orchestrator | changed: [testbed-manager] 2026-02-02 03:00:43.808150 | orchestrator | 2026-02-02 03:00:43.808158 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-02-02 03:00:43.808166 | orchestrator | Monday 02 February 2026 03:00:04 +0000 (0:00:01.526) 0:00:03.320 ******* 2026-02-02 03:00:43.808174 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-02-02 03:00:43.808183 | orchestrator | ok: [testbed-manager] 2026-02-02 03:00:43.808192 | orchestrator | 2026-02-02 03:00:43.808200 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-02-02 03:00:43.808208 | orchestrator | Monday 02 February 2026 03:00:38 +0000 (0:00:33.663) 0:00:36.983 ******* 2026-02-02 03:00:43.808216 | orchestrator | changed: [testbed-manager] 2026-02-02 03:00:43.808224 | orchestrator | 2026-02-02 03:00:43.808231 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-02-02 03:00:43.808239 | orchestrator | Monday 02 February 2026 03:00:39 +0000 (0:00:00.931) 0:00:37.915 ******* 2026-02-02 03:00:43.808247 | orchestrator | ok: [testbed-manager] 2026-02-02 03:00:43.808255 | orchestrator | 2026-02-02 03:00:43.808263 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-02-02 03:00:43.808271 | orchestrator | Monday 02 February 2026 03:00:40 
+0000 (0:00:00.671) 0:00:38.587 ******* 2026-02-02 03:00:43.808279 | orchestrator | changed: [testbed-manager] 2026-02-02 03:00:43.808288 | orchestrator | 2026-02-02 03:00:43.808298 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-02-02 03:00:43.808308 | orchestrator | Monday 02 February 2026 03:00:41 +0000 (0:00:01.432) 0:00:40.019 ******* 2026-02-02 03:00:43.808317 | orchestrator | changed: [testbed-manager] 2026-02-02 03:00:43.808327 | orchestrator | 2026-02-02 03:00:43.808337 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-02-02 03:00:43.808346 | orchestrator | Monday 02 February 2026 03:00:42 +0000 (0:00:00.821) 0:00:40.841 ******* 2026-02-02 03:00:43.808356 | orchestrator | changed: [testbed-manager] 2026-02-02 03:00:43.808365 | orchestrator | 2026-02-02 03:00:43.808375 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-02-02 03:00:43.808384 | orchestrator | Monday 02 February 2026 03:00:42 +0000 (0:00:00.590) 0:00:41.431 ******* 2026-02-02 03:00:43.808394 | orchestrator | ok: [testbed-manager] 2026-02-02 03:00:43.808404 | orchestrator | 2026-02-02 03:00:43.808413 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 03:00:43.808423 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 03:00:43.808433 | orchestrator | 2026-02-02 03:00:43.808442 | orchestrator | 2026-02-02 03:00:43.808451 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 03:00:43.808460 | orchestrator | Monday 02 February 2026 03:00:43 +0000 (0:00:00.437) 0:00:41.869 ******* 2026-02-02 03:00:43.808470 | orchestrator | =============================================================================== 2026-02-02 03:00:43.808479 | orchestrator | 
osism.services.openstackclient : Manage openstackclient service -------- 33.66s 2026-02-02 03:00:43.808489 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.53s 2026-02-02 03:00:43.808506 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.43s 2026-02-02 03:00:43.808516 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.33s 2026-02-02 03:00:43.808525 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.93s 2026-02-02 03:00:43.808535 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.82s 2026-02-02 03:00:43.808544 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.67s 2026-02-02 03:00:43.808554 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.59s 2026-02-02 03:00:43.808563 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.44s 2026-02-02 03:00:43.808573 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.22s 2026-02-02 03:00:46.417109 | orchestrator | 2026-02-02 03:00:46 | INFO  | Task 8f1ee5d5-ea57-4cd8-904d-07baffa5b773 (common) was prepared for execution. 2026-02-02 03:00:46.417209 | orchestrator | 2026-02-02 03:00:46 | INFO  | It takes a moment until task 8f1ee5d5-ea57-4cd8-904d-07baffa5b773 (common) has been started and output is visible here. 
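The "FAILED - RETRYING: Manage openstackclient service (10 retries left)" line and the "Wait for an healthy service" handler above both follow Ansible's retries/delay pattern: poll a probe until it succeeds or the retry budget is exhausted. A minimal sketch of that pattern with a toy probe (hypothetical, not the actual module code):

```python
import time

def wait_until(probe, retries=10, delay=5):
    # Call the probe until it succeeds or retries run out, mirroring
    # the behaviour behind Ansible's "FAILED - RETRYING" log lines.
    for _ in range(retries):
        if probe():
            return True
        time.sleep(delay)
    return False

attempts = {"n": 0}
def service_healthy():
    # Toy probe that reports healthy on the third call.
    attempts["n"] += 1
    return attempts["n"] >= 3

print(wait_until(service_healthy, retries=5, delay=0))
```

With the toy probe the loop succeeds on the third iteration; in the log above, the real healthcheck succeeded after one retry.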
2026-02-02 03:00:59.201337 | orchestrator | 2026-02-02 03:00:59.201451 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-02 03:00:59.201469 | orchestrator | 2026-02-02 03:00:59.201482 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-02 03:00:59.201494 | orchestrator | Monday 02 February 2026 03:00:50 +0000 (0:00:00.285) 0:00:00.285 ******* 2026-02-02 03:00:59.201506 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 03:00:59.201518 | orchestrator | 2026-02-02 03:00:59.201529 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-02 03:00:59.201540 | orchestrator | Monday 02 February 2026 03:00:52 +0000 (0:00:01.387) 0:00:01.672 ******* 2026-02-02 03:00:59.201551 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-02 03:00:59.201562 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-02 03:00:59.201573 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-02 03:00:59.201584 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-02 03:00:59.201595 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-02 03:00:59.201606 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-02 03:00:59.201616 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-02 03:00:59.201627 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-02 03:00:59.201637 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 
2026-02-02 03:00:59.201668 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-02 03:00:59.201679 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-02 03:00:59.201691 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-02 03:00:59.201702 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-02 03:00:59.201713 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-02 03:00:59.201724 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-02 03:00:59.201735 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-02 03:00:59.201745 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-02 03:00:59.201779 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-02 03:00:59.201791 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-02 03:00:59.201802 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-02 03:00:59.201841 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-02 03:00:59.201862 | orchestrator | 2026-02-02 03:00:59.201882 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-02 03:00:59.201902 | orchestrator | Monday 02 February 2026 03:00:54 +0000 (0:00:02.609) 0:00:04.282 ******* 2026-02-02 03:00:59.201925 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 03:00:59.201946 | orchestrator | 2026-02-02 03:00:59.201961 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-02 03:00:59.201980 | orchestrator | Monday 02 February 2026 03:00:56 +0000 (0:00:01.376) 0:00:05.658 ******* 2026-02-02 03:00:59.201997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 03:00:59.202079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 03:00:59.202121 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 03:00:59.202137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 03:00:59.202150 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 03:00:59.202164 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 03:00:59.202197 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 03:00:59.202212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:00:59.202226 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:00:59.202246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:01:00.136410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:01:00.136510 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:01:00.136547 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:01:00.136560 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:01:00.136572 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:01:00.136604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 
03:01:00.136616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:01:00.136654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:01:00.136666 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:01:00.136678 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:01:00.136696 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:01:00.136708 | orchestrator | 2026-02-02 03:01:00.136721 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-02 03:01:00.136734 | orchestrator | Monday 02 February 2026 03:00:59 +0000 (0:00:03.459) 0:00:09.117 ******* 2026-02-02 03:01:00.136748 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 03:01:00.136762 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:01:00.136776 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:01:00.136791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 03:01:00.136859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:01:00.758940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:01:00.759095 | orchestrator | skipping: [testbed-manager] 2026-02-02 03:01:00.759160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 03:01:00.759176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:01:00.759189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:01:00.759200 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:01:00.759212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 03:01:00.759229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:01:00.759241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:01:00.759253 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:01:00.759292 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 03:01:00.759316 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:01:00.759328 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:01:00.759339 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:01:00.759350 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:01:00.759361 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 03:01:00.759373 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:01:00.759384 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:01:00.759395 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:01:00.759408 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 03:01:00.759427 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:01:01.596942 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:01:01.597041 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:01:01.597057 | orchestrator | 2026-02-02 03:01:01.597071 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-02 03:01:01.597084 | orchestrator | Monday 02 February 2026 03:01:00 +0000 (0:00:00.912) 0:00:10.029 ******* 2026-02-02 03:01:01.597097 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 03:01:01.597110 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:01:01.597122 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:01:01.597152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 03:01:01.597170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:01.597205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:01.597218 | orchestrator | skipping: [testbed-manager]
2026-02-02 03:01:01.597229 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:01:01.597268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 03:01:01.597280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:01.597292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:01.597303 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:01:01.597315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 03:01:01.597326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:01.597342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:01.597362 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:01:01.597373 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 03:01:01.597404 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:06.849756 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:06.850000 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:01:06.850130 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 03:01:06.850151 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:06.850172 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:06.850185 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:01:06.850196 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 03:01:06.850249 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:06.850262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:06.850273 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:01:06.850285 | orchestrator |
2026-02-02 03:01:06.850298 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-02
03:01:06.850311 | orchestrator | Monday 02 February 2026 03:01:02 +0000 (0:00:01.779) 0:00:11.809 *******
2026-02-02 03:01:06.850322 | orchestrator | skipping: [testbed-manager]
2026-02-02 03:01:06.850333 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:01:06.850344 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:01:06.850355 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:01:06.850384 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:01:06.850408 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:01:06.850420 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:01:06.850431 | orchestrator |
2026-02-02 03:01:06.850442 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-02-02 03:01:06.850453 | orchestrator | Monday 02 February 2026 03:01:03 +0000 (0:00:00.712) 0:00:12.521 *******
2026-02-02 03:01:06.850464 | orchestrator | skipping: [testbed-manager]
2026-02-02 03:01:06.850475 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:01:06.850486 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:01:06.850497 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:01:06.850508 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:01:06.850519 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:01:06.850530 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:01:06.850541 | orchestrator |
2026-02-02 03:01:06.850552 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-02-02 03:01:06.850563 | orchestrator | Monday 02 February 2026 03:01:04 +0000 (0:00:00.887) 0:00:13.409 *******
2026-02-02 03:01:06.850576 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 03:01:06.850599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 03:01:06.850619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 03:01:06.850635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 03:01:06.850647 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 03:01:06.850659 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 03:01:06.850680 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 03:01:09.485882 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:09.485963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:09.485991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:09.486010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:09.486060 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:09.486067 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:09.486091 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:09.486099 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:09.486107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:09.486119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:09.486126 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:09.486132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:09.486138 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:09.486144 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:09.486151 | orchestrator |
2026-02-02 03:01:09.486158 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-02-02 03:01:09.486165 | orchestrator | Monday 02 February 2026 03:01:07 +0000 (0:00:03.390) 0:00:16.799 *******
2026-02-02 03:01:09.486171 | orchestrator | [WARNING]: Skipped 2026-02-02
03:01:09.486178 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-02-02 03:01:09.486186 | orchestrator | to this access issue:
2026-02-02 03:01:09.486192 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-02-02 03:01:09.486198 | orchestrator | directory
2026-02-02 03:01:09.486204 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-02 03:01:09.486211 | orchestrator |
2026-02-02 03:01:09.486217 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-02-02 03:01:09.486222 | orchestrator | Monday 02 February 2026 03:01:08 +0000 (0:00:01.005) 0:00:17.805 *******
2026-02-02 03:01:09.486228 | orchestrator | [WARNING]: Skipped
2026-02-02 03:01:09.486238 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-02-02 03:01:19.528461 | orchestrator | to this access issue:
2026-02-02 03:01:19.528532 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-02-02 03:01:19.528539 | orchestrator | directory
2026-02-02 03:01:19.528544 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-02 03:01:19.528549 | orchestrator |
2026-02-02 03:01:19.528554 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-02-02 03:01:19.528559 | orchestrator | Monday 02 February 2026 03:01:09 +0000 (0:00:01.245) 0:00:19.050 *******
2026-02-02 03:01:19.528580 | orchestrator | [WARNING]: Skipped
2026-02-02 03:01:19.528585 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-02-02 03:01:19.528589 | orchestrator | to this access issue:
2026-02-02 03:01:19.528593 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-02-02 03:01:19.528597 | orchestrator | directory
2026-02-02 03:01:19.528601 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-02 03:01:19.528604 | orchestrator |
2026-02-02 03:01:19.528609 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-02-02 03:01:19.528613 | orchestrator | Monday 02 February 2026 03:01:10 +0000 (0:00:00.847) 0:00:19.898 *******
2026-02-02 03:01:19.528617 | orchestrator | [WARNING]: Skipped
2026-02-02 03:01:19.528620 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-02-02 03:01:19.528624 | orchestrator | to this access issue:
2026-02-02 03:01:19.528628 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-02-02 03:01:19.528632 | orchestrator | directory
2026-02-02 03:01:19.528635 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-02 03:01:19.528639 | orchestrator |
2026-02-02 03:01:19.528643 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-02-02 03:01:19.528647 | orchestrator | Monday 02 February 2026 03:01:11 +0000 (0:00:00.898) 0:00:20.796 *******
2026-02-02 03:01:19.528651 | orchestrator | changed: [testbed-manager]
2026-02-02 03:01:19.528655 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:01:19.528658 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:01:19.528662 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:01:19.528666 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:01:19.528670 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:01:19.528686 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:01:19.528690 | orchestrator |
2026-02-02 03:01:19.528694 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-02-02 03:01:19.528698 | orchestrator | Monday 02 February 2026 03:01:14 +0000 (0:00:02.522) 0:00:23.318 *******
2026-02-02 03:01:19.528702 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-02 03:01:19.528707 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-02 03:01:19.528710 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-02 03:01:19.528714 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-02 03:01:19.528718 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-02 03:01:19.528722 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-02 03:01:19.528728 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-02 03:01:19.528732 | orchestrator |
2026-02-02 03:01:19.528736 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-02-02 03:01:19.528740 | orchestrator | Monday 02 February 2026 03:01:16 +0000 (0:00:01.892) 0:00:25.621 *******
2026-02-02 03:01:19.528744 | orchestrator | changed: [testbed-manager]
2026-02-02 03:01:19.528748 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:01:19.528752 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:01:19.528755 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:01:19.528806 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:01:19.528812 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:01:19.528818 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:01:19.528824 | orchestrator |
2026-02-02 03:01:19.528827 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-02-02 03:01:19.528836 | orchestrator | Monday 02 February 2026 03:01:18 +0000 (0:00:01.892) 0:00:27.514 ******* 2026-02-02
03:01:19.528842 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 03:01:19.528857 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:19.528862 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 03:01:19.528866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:19.528870 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 03:01:19.528877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:19.528882 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 03:01:19.528889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:19.528899 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:19.528909 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 03:01:25.506656 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:25.506833 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:25.506892 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 03:01:25.506936 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:25.506979 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:25.506991 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 03:01:25.507002 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:25.507032 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:25.507044 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:25.507054 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:25.507065 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:25.507077 | orchestrator |
2026-02-02 03:01:25.507089 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-02-02 03:01:25.507101 | orchestrator | Monday 02 February 2026 03:01:19 +0000 (0:00:01.527) 0:00:29.041 *******
2026-02-02 03:01:25.507111 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-02 03:01:25.507121 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-02 03:01:25.507137 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-02 03:01:25.507147 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-02 03:01:25.507157 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-02 03:01:25.507167 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-02 03:01:25.507177 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-02 03:01:25.507186 | orchestrator |
2026-02-02 03:01:25.507209 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-02-02 03:01:25.507221 | orchestrator | Monday 02 February 2026 03:01:21 +0000 (0:00:01.986) 0:00:31.028 *******
2026-02-02 03:01:25.507233 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-02 03:01:25.507245 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-02 03:01:25.507258 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-02 03:01:25.507275 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-02 03:01:25.507287 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-02 03:01:25.507298 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-02 03:01:25.507310 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-02 03:01:25.507321 | orchestrator |
2026-02-02 03:01:25.507333 | orchestrator | TASK [common : Check common containers] ****************************************
2026-02-02 03:01:25.507344 | orchestrator | Monday 02 February 2026 03:01:23 +0000 (0:00:01.692) 0:00:32.720 *******
2026-02-02 03:01:25.507356 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 03:01:25.507377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 03:01:26.115261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 03:01:26.115360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 03:01:26.115412 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 03:01:26.115448 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 03:01:26.115468 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 03:01:26.115487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:26.115505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:26.115541 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:26.115588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:26.115627 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:26.115645 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:26.115799 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:26.115816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:26.115830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:01:26.115860 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:02:46.718722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:02:46.718827 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:02:46.718835 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:02:46.718851 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:02:46.718857 | orchestrator |
2026-02-02 03:02:46.718863 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-02-02 03:02:46.718869 | orchestrator | Monday 02 February 2026 03:01:26 +0000 (0:00:02.671) 0:00:35.392 *******
2026-02-02 03:02:46.718874 | orchestrator | changed: [testbed-manager]
2026-02-02 03:02:46.718880 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:02:46.718885 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:02:46.718889 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:02:46.718895 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:02:46.718899 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:02:46.718904 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:02:46.718908 | orchestrator |
2026-02-02 03:02:46.718913 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-02-02 03:02:46.718918 | orchestrator | Monday 02 February 2026 03:01:27 +0000 (0:00:01.389) 0:00:36.781 *******
2026-02-02 03:02:46.718923 | orchestrator | changed: [testbed-manager]
2026-02-02 03:02:46.718927 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:02:46.718932 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:02:46.718936 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:02:46.718941 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:02:46.718945 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:02:46.718950 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:02:46.718955 | orchestrator |
2026-02-02 03:02:46.718959 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-02 03:02:46.718964 | orchestrator | Monday 02 February 2026 03:01:28 +0000 (0:00:01.099) 0:00:37.880 *******
2026-02-02 03:02:46.718969 | orchestrator |
2026-02-02 03:02:46.718973 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-02 03:02:46.718978 | orchestrator | Monday 02 February 2026 03:01:28 +0000 (0:00:00.067) 0:00:37.948 *******
2026-02-02 03:02:46.718982 | orchestrator |
2026-02-02 03:02:46.718987 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-02 03:02:46.718992 | orchestrator | Monday 02 February 2026 03:01:28 +0000 (0:00:00.065) 0:00:38.013 *******
2026-02-02 03:02:46.718996 | orchestrator |
2026-02-02 03:02:46.719001 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-02 03:02:46.719006 | orchestrator | Monday 02 February 2026 03:01:28 +0000 (0:00:00.064) 0:00:38.078 *******
2026-02-02 03:02:46.719010 | orchestrator |
2026-02-02 03:02:46.719015 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-02 03:02:46.719024 | orchestrator | Monday 02 February 2026 03:01:29 +0000 (0:00:00.240) 0:00:38.319 *******
2026-02-02 03:02:46.719029 | orchestrator |
2026-02-02 03:02:46.719033 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-02 03:02:46.719038 | orchestrator | Monday 02 February 2026 03:01:29 +0000 (0:00:00.066) 0:00:38.385 *******
2026-02-02 03:02:46.719043 | orchestrator |
2026-02-02 03:02:46.719048 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-02 03:02:46.719052 | orchestrator | Monday 02 February 2026 03:01:29 +0000 (0:00:00.067) 0:00:38.453 *******
2026-02-02 03:02:46.719057 | orchestrator |
2026-02-02 03:02:46.719061 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-02-02 03:02:46.719066 | orchestrator | Monday 02 February 2026 03:01:29 +0000 (0:00:00.090) 0:00:38.543 *******
2026-02-02 03:02:46.719071 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:02:46.719075 | orchestrator | changed: [testbed-manager]
2026-02-02 03:02:46.719080 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:02:46.719085 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:02:46.719089 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:02:46.719104 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:02:46.719109 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:02:46.719114 | orchestrator |
2026-02-02 03:02:46.719118 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-02-02 03:02:46.719123 | orchestrator | Monday 02 February 2026 03:02:07 +0000 (0:00:37.918) 0:01:16.462 *******
2026-02-02 03:02:46.719128 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:02:46.719132 | orchestrator | changed: [testbed-manager]
2026-02-02 03:02:46.719137 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:02:46.719141 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:02:46.719146 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:02:46.719151 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:02:46.719155 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:02:46.719160 | orchestrator |
2026-02-02 03:02:46.719164 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-02-02 03:02:46.719169 | orchestrator | Monday 02 February 2026 03:02:35 +0000 (0:00:28.770) 0:01:45.232 *******
2026-02-02 03:02:46.719174 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:02:46.719179 | orchestrator | ok: [testbed-manager]
2026-02-02 03:02:46.719184 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:02:46.719189 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:02:46.719193 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:02:46.719198 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:02:46.719202 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:02:46.719207 | orchestrator |
2026-02-02 03:02:46.719212 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-02-02 03:02:46.719216 | orchestrator | Monday 02 February 2026 03:02:37 +0000 (0:00:01.923) 0:01:47.156 *******
2026-02-02 03:02:46.719221 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:02:46.719226 | orchestrator | changed: [testbed-manager]
2026-02-02 03:02:46.719230 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:02:46.719235 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:02:46.719239 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:02:46.719244 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:02:46.719248 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:02:46.719253 | orchestrator |
2026-02-02 03:02:46.719258 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 03:02:46.719264 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-02 03:02:46.719270 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-02 03:02:46.719280 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-02 03:02:46.719299 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-02 03:02:46.719305 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-02 03:02:46.719310 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-02 03:02:46.719316 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-02 03:02:46.719321 | orchestrator |
2026-02-02 03:02:46.719327 | orchestrator |
2026-02-02 03:02:46.719332 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 03:02:46.719338 | orchestrator | Monday 02 February 2026 03:02:46 +0000 (0:00:08.812) 0:01:55.969 *******
2026-02-02 03:02:46.719343 | orchestrator | ===============================================================================
2026-02-02 03:02:46.719349 | orchestrator | common : Restart fluentd container ------------------------------------- 37.92s
2026-02-02 03:02:46.719354 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 28.77s
2026-02-02 03:02:46.719360 | orchestrator | common : Restart cron container ----------------------------------------- 8.81s
2026-02-02 03:02:46.719365 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.46s
2026-02-02 03:02:46.719370 | orchestrator | common : Copying over config.json files for services -------------------- 3.39s
2026-02-02 03:02:46.719376 | orchestrator | common : Check common containers ---------------------------------------- 2.67s
2026-02-02 03:02:46.719381 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.61s
2026-02-02 03:02:46.719386 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.52s
2026-02-02 03:02:46.719392 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.30s
2026-02-02 03:02:46.719397 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 1.99s
2026-02-02 03:02:46.719402 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.92s
2026-02-02 03:02:46.719408 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.89s
2026-02-02 03:02:46.719413 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.78s
2026-02-02 03:02:46.719419 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.69s
2026-02-02 03:02:46.719424 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.53s
2026-02-02 03:02:46.719430 | orchestrator | common : Creating log volume -------------------------------------------- 1.39s
2026-02-02 03:02:46.719438 | orchestrator | common : include_tasks -------------------------------------------------- 1.39s
2026-02-02 03:02:47.185242 | orchestrator | common : include_tasks -------------------------------------------------- 1.38s
2026-02-02 03:02:47.185329 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.25s
2026-02-02 03:02:47.185341 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.10s
2026-02-02 03:02:49.647913 | orchestrator | 2026-02-02 03:02:49 | INFO  | Task 3e8fbe22-98ad-427b-8b4c-650482904805 (loadbalancer) was prepared for execution.
2026-02-02 03:02:49.648038 | orchestrator | 2026-02-02 03:02:49 | INFO  | It takes a moment until task 3e8fbe22-98ad-427b-8b4c-650482904805 (loadbalancer) has been started and output is visible here.
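The PLAY RECAP above follows Ansible's fixed `host : ok=N changed=N unreachable=N failed=N ...` layout, so a post-processing step can flag failed or unreachable hosts in a console log like this one without parsing the rest of the output. A minimal sketch, assuming only that layout (the regex and function name are illustrative, not part of this job):

```python
import re

# One recap entry per host, e.g.:
#   testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
RECAP_RE = re.compile(
    r"(?P<host>\S+)\s+:\s+ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def failed_hosts(log_text: str) -> list[str]:
    """Return hosts whose recap shows failed tasks or an unreachable node."""
    bad = []
    for m in RECAP_RE.finditer(log_text):
        if int(m.group("failed")) or int(m.group("unreachable")):
            bad.append(m.group("host"))
    return bad

recap = (
    "testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0\n"
    "testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0\n"
)
assert failed_hosts(recap) == []  # the recap above reports no failures
```

In this run every host reports `failed=0 unreachable=0`, so such a check would pass; the timestamp prefixes added by the job console do not interfere because the regex anchors on the `host : ok=` pattern itself.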
2026-02-02 03:03:04.829969 | orchestrator |
2026-02-02 03:03:04.830163 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-02 03:03:04.830189 | orchestrator |
2026-02-02 03:03:04.830203 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-02 03:03:04.830216 | orchestrator | Monday 02 February 2026 03:02:54 +0000 (0:00:00.285) 0:00:00.285 *******
2026-02-02 03:03:04.830259 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:03:04.830277 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:03:04.830290 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:03:04.830304 | orchestrator |
2026-02-02 03:03:04.830318 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-02 03:03:04.830332 | orchestrator | Monday 02 February 2026 03:02:54 +0000 (0:00:00.310) 0:00:00.597 *******
2026-02-02 03:03:04.830347 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-02-02 03:03:04.830360 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-02-02 03:03:04.830373 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-02-02 03:03:04.830387 | orchestrator |
2026-02-02 03:03:04.830401 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-02-02 03:03:04.830414 | orchestrator |
2026-02-02 03:03:04.830427 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-02 03:03:04.830457 | orchestrator | Monday 02 February 2026 03:02:54 +0000 (0:00:00.476) 0:00:01.073 *******
2026-02-02 03:03:04.830564 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 03:03:04.830583 | orchestrator |
2026-02-02 03:03:04.830596 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-02-02 03:03:04.830609 | orchestrator | Monday 02 February 2026 03:02:55 +0000 (0:00:00.565) 0:00:01.638 *******
2026-02-02 03:03:04.830622 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:03:04.830635 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:03:04.830648 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:03:04.830661 | orchestrator |
2026-02-02 03:03:04.830673 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-02-02 03:03:04.830686 | orchestrator | Monday 02 February 2026 03:02:56 +0000 (0:00:00.598) 0:00:02.237 *******
2026-02-02 03:03:04.830699 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 03:03:04.830711 | orchestrator |
2026-02-02 03:03:04.830725 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-02-02 03:03:04.830738 | orchestrator | Monday 02 February 2026 03:02:56 +0000 (0:00:00.780) 0:00:03.017 *******
2026-02-02 03:03:04.830752 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:03:04.830764 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:03:04.830777 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:03:04.830789 | orchestrator |
2026-02-02 03:03:04.830802 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-02-02 03:03:04.830815 | orchestrator | Monday 02 February 2026 03:02:57 +0000 (0:00:00.658) 0:00:03.676 *******
2026-02-02 03:03:04.830828 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-02 03:03:04.830841 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-02 03:03:04.830854 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-02 03:03:04.830868 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-02 03:03:04.830881 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-02 03:03:04.830894 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-02 03:03:04.830907 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-02 03:03:04.830922 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-02 03:03:04.830934 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-02 03:03:04.830947 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-02 03:03:04.830974 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-02 03:03:04.830987 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-02 03:03:04.831000 | orchestrator |
2026-02-02 03:03:04.831013 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-02 03:03:04.831026 | orchestrator | Monday 02 February 2026 03:03:00 +0000 (0:00:03.038) 0:00:06.714 *******
2026-02-02 03:03:04.831040 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-02-02 03:03:04.831054 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-02 03:03:04.831068 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-02-02 03:03:04.831081 | orchestrator |
2026-02-02 03:03:04.831095 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-02 03:03:04.831108 | orchestrator | Monday 02 February 2026 03:03:01 +0000 (0:00:00.766) 0:00:07.481 *******
2026-02-02 03:03:04.831121 | orchestrator | changed: [testbed-node-2] =>
(item=ip_vs) 2026-02-02 03:03:04.831135 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-02-02 03:03:04.831149 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-02-02 03:03:04.831161 | orchestrator | 2026-02-02 03:03:04.831175 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-02 03:03:04.831188 | orchestrator | Monday 02 February 2026 03:03:02 +0000 (0:00:01.212) 0:00:08.694 ******* 2026-02-02 03:03:04.831202 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-02-02 03:03:04.831215 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:03:04.831252 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-02-02 03:03:04.831266 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:03:04.831279 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-02-02 03:03:04.831292 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:03:04.831305 | orchestrator | 2026-02-02 03:03:04.831317 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-02-02 03:03:04.831330 | orchestrator | Monday 02 February 2026 03:03:03 +0000 (0:00:00.514) 0:00:09.209 ******* 2026-02-02 03:03:04.831356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-02 03:03:04.831379 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-02 03:03:04.831394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-02 03:03:04.831418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 
03:03:04.831433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 03:03:04.831456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 03:03:09.903650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 03:03:09.903751 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 03:03:09.903762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 03:03:09.903769 | orchestrator | 2026-02-02 03:03:09.903776 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-02 03:03:09.903784 | orchestrator | Monday 02 February 2026 03:03:04 +0000 (0:00:01.733) 0:00:10.943 ******* 2026-02-02 03:03:09.903790 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:03:09.903827 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:03:09.903833 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:03:09.903840 | orchestrator | 2026-02-02 03:03:09.903847 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-02 03:03:09.903853 | orchestrator | Monday 02 February 2026 03:03:05 +0000 (0:00:00.852) 0:00:11.795 ******* 2026-02-02 03:03:09.903860 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-02-02 03:03:09.903867 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-02-02 
03:03:09.903873 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-02-02 03:03:09.903879 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-02-02 03:03:09.903885 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-02-02 03:03:09.903892 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-02-02 03:03:09.903898 | orchestrator | 2026-02-02 03:03:09.903903 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-02 03:03:09.903910 | orchestrator | Monday 02 February 2026 03:03:07 +0000 (0:00:01.410) 0:00:13.206 ******* 2026-02-02 03:03:09.903916 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:03:09.903922 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:03:09.903929 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:03:09.903935 | orchestrator | 2026-02-02 03:03:09.903941 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-02 03:03:09.903947 | orchestrator | Monday 02 February 2026 03:03:07 +0000 (0:00:00.876) 0:00:14.082 ******* 2026-02-02 03:03:09.903954 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:03:09.903960 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:03:09.903966 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:03:09.903972 | orchestrator | 2026-02-02 03:03:09.903978 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-02 03:03:09.903985 | orchestrator | Monday 02 February 2026 03:03:09 +0000 (0:00:01.347) 0:00:15.430 ******* 2026-02-02 03:03:09.903992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-02 03:03:09.904018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 03:03:09.904026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 03:03:09.904035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__6a0fc89b6ae978c169fda4c08cf90b46a51ea668', '__omit_place_holder__6a0fc89b6ae978c169fda4c08cf90b46a51ea668'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-02 03:03:09.904049 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:03:09.904057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-02 03:03:09.904097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 03:03:09.904106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 03:03:09.904113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6a0fc89b6ae978c169fda4c08cf90b46a51ea668', '__omit_place_holder__6a0fc89b6ae978c169fda4c08cf90b46a51ea668'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-02 03:03:09.904120 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:03:09.904133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-02 03:03:12.740631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 03:03:12.740757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 03:03:12.740776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6a0fc89b6ae978c169fda4c08cf90b46a51ea668', '__omit_place_holder__6a0fc89b6ae978c169fda4c08cf90b46a51ea668'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-02 03:03:12.740789 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:03:12.740801 | orchestrator | 2026-02-02 03:03:12.740808 | orchestrator | TASK [loadbalancer : Copying checks for services 
which are enabled] ************ 2026-02-02 03:03:12.740816 | orchestrator | Monday 02 February 2026 03:03:09 +0000 (0:00:00.584) 0:00:16.014 ******* 2026-02-02 03:03:12.740823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-02 03:03:12.740831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-02 03:03:12.740837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-02 03:03:12.740877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 03:03:12.740885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 03:03:12.740892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6a0fc89b6ae978c169fda4c08cf90b46a51ea668', 
'__omit_place_holder__6a0fc89b6ae978c169fda4c08cf90b46a51ea668'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-02 03:03:12.740899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 03:03:12.740905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 03:03:12.740912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6a0fc89b6ae978c169fda4c08cf90b46a51ea668', 
'__omit_place_holder__6a0fc89b6ae978c169fda4c08cf90b46a51ea668'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-02 03:03:12.740941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 03:03:21.371444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 03:03:21.371634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6a0fc89b6ae978c169fda4c08cf90b46a51ea668', 
'__omit_place_holder__6a0fc89b6ae978c169fda4c08cf90b46a51ea668'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-02 03:03:21.371651 | orchestrator |
2026-02-02 03:03:21.371663 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2026-02-02 03:03:21.371675 | orchestrator | Monday 02 February 2026 03:03:12 +0000 (0:00:02.833) 0:00:18.847 *******
2026-02-02 03:03:21.371685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-02 03:03:21.371697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-02 03:03:21.371708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-02 03:03:21.371745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-02 03:03:21.371804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-02 03:03:21.371826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-02 03:03:21.371844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-02 03:03:21.371861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-02 03:03:21.371881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-02 03:03:21.371899 | orchestrator |
2026-02-02 03:03:21.371915 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2026-02-02 03:03:21.371930 | orchestrator | Monday 02 February 2026 03:03:15 +0000 (0:00:02.980) 0:00:21.828 *******
2026-02-02 03:03:21.371952 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-02-02 03:03:21.371971 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-02-02 03:03:21.371987 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-02-02 03:03:21.372003 | orchestrator |
2026-02-02 03:03:21.372019 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-02-02 03:03:21.372036 | orchestrator | Monday 02 February 2026 03:03:17 +0000 (0:00:02.276) 0:00:24.104 *******
2026-02-02 03:03:21.372053 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-02-02 03:03:21.372070 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-02-02 03:03:21.372085 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-02-02 03:03:21.372101 | orchestrator |
2026-02-02 03:03:21.372118 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-02-02 03:03:21.372135 | orchestrator | Monday 02 February 2026 03:03:20 +0000 (0:00:02.766) 0:00:26.871 *******
2026-02-02 03:03:21.372152 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:03:21.372169 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:03:21.372185 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:03:21.372202 | orchestrator |
2026-02-02 03:03:21.372231 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-02-02 03:03:32.845170 | orchestrator | Monday 02 February 2026 03:03:21 +0000 (0:00:00.614) 0:00:27.485 *******
2026-02-02 03:03:32.845265 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-02 03:03:32.845293 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-02 03:03:32.845308 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-02 03:03:32.845323 | orchestrator |
2026-02-02 03:03:32.845339 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-02-02 03:03:32.845354 | orchestrator | Monday 02 February 2026 03:03:23 +0000 (0:00:02.089) 0:00:29.575 *******
2026-02-02 03:03:32.845370 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-02 03:03:32.845386 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-02 03:03:32.845400 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-02 03:03:32.845414 | orchestrator |
2026-02-02 03:03:32.845427 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-02-02 03:03:32.845497 | orchestrator | Monday 02 February 2026 03:03:25 +0000 (0:00:02.132) 0:00:31.707 *******
2026-02-02 03:03:32.845512 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2026-02-02 03:03:32.845526 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2026-02-02 03:03:32.845541 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2026-02-02 03:03:32.845556 | orchestrator |
2026-02-02 03:03:32.845586 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2026-02-02 03:03:32.845602 | orchestrator | Monday 02 February 2026 03:03:26 +0000 (0:00:01.390) 0:00:33.098 *******
2026-02-02 03:03:32.845618 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2026-02-02 03:03:32.845634 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2026-02-02 03:03:32.845648 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2026-02-02 03:03:32.845663 | orchestrator |
2026-02-02 03:03:32.845703 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-02 03:03:32.845713 | orchestrator | Monday 02 February 2026 03:03:28 +0000 (0:00:01.468) 0:00:34.566 *******
2026-02-02 03:03:32.845724 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 03:03:32.845734 | orchestrator |
2026-02-02 03:03:32.845745 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2026-02-02 03:03:32.845756 | orchestrator | Monday 02 February 2026 03:03:28 +0000 (0:00:00.542) 0:00:35.109 *******
2026-02-02 03:03:32.845769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-02 03:03:32.845781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-02 03:03:32.845799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-02 03:03:32.845835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-02 03:03:32.845850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-02 03:03:32.845864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-02 03:03:32.845885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-02 03:03:32.845898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-02 03:03:32.845912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-02 03:03:32.845925 | orchestrator |
2026-02-02 03:03:32.845939 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2026-02-02 03:03:32.845952 | orchestrator | Monday 02 February 2026 03:03:32 +0000 (0:00:03.284) 0:00:38.394 *******
2026-02-02 03:03:32.845978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-02 03:03:33.662297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-02 03:03:33.662382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-02 03:03:33.662413 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:03:33.662424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-02 03:03:33.662497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-02 03:03:33.662506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-02 03:03:33.662513 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:03:33.662521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-02 03:03:33.662565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-02 03:03:33.662575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-02 03:03:33.662589 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:03:33.662596 | orchestrator |
2026-02-02 03:03:33.662605 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2026-02-02 03:03:33.662614 | orchestrator | Monday 02 February 2026 03:03:32 +0000 (0:00:00.568) 0:00:38.962 *******
2026-02-02 03:03:33.662623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-02 03:03:33.662630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-02 03:03:33.662638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-02 03:03:33.662646 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:03:33.662654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-02 03:03:33.662670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-02 03:03:34.715070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-02 03:03:34.715195 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:03:34.715214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-02 03:03:34.715228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-02 03:03:34.715240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-02 03:03:34.715251 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:03:34.715263 | orchestrator |
2026-02-02 03:03:34.715275 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-02-02 03:03:34.715288 | orchestrator | Monday 02 February 2026 03:03:33 +0000 (0:00:00.813) 0:00:39.775 *******
2026-02-02 03:03:34.715300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-02 03:03:34.715312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-02 03:03:34.715341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-02 03:03:34.715360 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:03:34.715405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-02 03:03:34.715418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-02 03:03:34.715458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-02 03:03:34.715471 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:03:34.715483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-02 03:03:34.715510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-02 03:03:34.715527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-02 03:03:34.715556 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:03:36.070503 | orchestrator |
2026-02-02 03:03:36.070606 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-02-02 03:03:36.070625 | orchestrator | Monday 02 February 2026 03:03:34 +0000 (0:00:01.051) 0:00:40.827 *******
2026-02-02 03:03:36.070640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-02 03:03:36.070657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-02 03:03:36.070669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-02 03:03:36.070681 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:03:36.070694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-02 03:03:36.070706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 03:03:36.070733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 03:03:36.070769 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:03:36.070799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-02 03:03:36.070812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 03:03:36.070824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 03:03:36.070835 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:03:36.070846 | orchestrator | 2026-02-02 03:03:36.070857 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-02 03:03:36.070869 | orchestrator | Monday 02 February 2026 03:03:35 +0000 (0:00:00.573) 0:00:41.400 ******* 2026-02-02 03:03:36.070880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-02 03:03:36.070892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 03:03:36.070929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 03:03:36.070942 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:03:36.070963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-02 03:03:37.091146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 03:03:37.091242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 03:03:37.091263 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:03:37.091280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-02 03:03:37.091294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 03:03:37.091307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 03:03:37.091364 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:03:37.091390 | orchestrator | 2026-02-02 03:03:37.091404 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-02-02 03:03:37.091418 | orchestrator | Monday 02 February 2026 03:03:36 +0000 (0:00:00.784) 0:00:42.184 ******* 2026-02-02 03:03:37.091513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-02-02 03:03:37.091554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 03:03:37.091570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 03:03:37.091584 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:03:37.091598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-02-02 03:03:37.091612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 03:03:37.091636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 03:03:37.091645 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:03:37.091658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-02-02 03:03:37.091673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 03:03:38.464101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 03:03:38.464211 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:03:38.464232 | orchestrator | 2026-02-02 03:03:38.464250 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-02-02 03:03:38.464268 | orchestrator | Monday 02 February 2026 03:03:37 +0000 (0:00:01.020) 0:00:43.205 ******* 2026-02-02 03:03:38.464285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-02 03:03:38.464301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 03:03:38.464344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 03:03:38.464356 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:03:38.464366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-02 03:03:38.464398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 03:03:38.464509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 03:03:38.464530 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:03:38.464546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-02 03:03:38.464562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 03:03:38.464588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 03:03:38.464599 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:03:38.464610 | orchestrator | 2026-02-02 03:03:38.464622 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-02-02 03:03:38.464632 | orchestrator | Monday 02 February 2026 03:03:37 +0000 (0:00:00.587) 0:00:43.793 ******* 2026-02-02 03:03:38.464643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-02 03:03:38.464654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 03:03:38.464681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 03:03:44.575875 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:03:44.575989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-02 03:03:44.576011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 03:03:44.576049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 03:03:44.576062 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:03:44.576074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-02 03:03:44.576101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 03:03:44.576113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 03:03:44.576125 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:03:44.576137 | orchestrator | 2026-02-02 03:03:44.576149 | orchestrator | 
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-02 03:03:44.576163 | orchestrator | Monday 02 February 2026 03:03:38 +0000 (0:00:00.790) 0:00:44.583 ******* 2026-02-02 03:03:44.576174 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-02 03:03:44.576202 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-02 03:03:44.576215 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-02 03:03:44.576226 | orchestrator | 2026-02-02 03:03:44.576237 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-02 03:03:44.576249 | orchestrator | Monday 02 February 2026 03:03:39 +0000 (0:00:01.404) 0:00:45.988 ******* 2026-02-02 03:03:44.576261 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-02 03:03:44.576272 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-02 03:03:44.576283 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-02 03:03:44.576294 | orchestrator | 2026-02-02 03:03:44.576313 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-02 03:03:44.576324 | orchestrator | Monday 02 February 2026 03:03:41 +0000 (0:00:01.672) 0:00:47.660 ******* 2026-02-02 03:03:44.576335 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-02 03:03:44.576346 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-02 03:03:44.576357 | orchestrator | skipping: [testbed-node-0] => 
(item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-02 03:03:44.576482 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:03:44.576508 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-02 03:03:44.576529 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-02 03:03:44.576548 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:03:44.576568 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-02 03:03:44.576587 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:03:44.576606 | orchestrator | 2026-02-02 03:03:44.576626 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-02-02 03:03:44.576646 | orchestrator | Monday 02 February 2026 03:03:42 +0000 (0:00:00.898) 0:00:48.558 ******* 2026-02-02 03:03:44.576666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-02 03:03:44.576689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-02 03:03:44.576720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-02 03:03:44.576757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 03:03:48.919275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 03:03:48.919362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 03:03:48.919374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 03:03:48.919382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 03:03:48.919388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 03:03:48.919436 | orchestrator | 2026-02-02 03:03:48.919459 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-02 03:03:48.919468 | orchestrator | Monday 02 February 2026 03:03:44 +0000 (0:00:02.133) 0:00:50.692 ******* 2026-02-02 03:03:48.919476 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:03:48.919483 | orchestrator | 2026-02-02 03:03:48.919490 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-02 03:03:48.919497 | orchestrator | Monday 02 February 2026 03:03:45 +0000 (0:00:00.850) 0:00:51.542 ******* 2026-02-02 03:03:48.919522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-02 03:03:48.919546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-02 03:03:48.919554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-02 03:03:48.919561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-02 03:03:48.919568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-02 03:03:48.919578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-02 03:03:48.919585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-02 03:03:48.919602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-02 03:03:49.552939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-02 03:03:49.553021 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-02 03:03:49.553031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-02 03:03:49.553052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-02 03:03:49.553058 | orchestrator | 2026-02-02 03:03:49.553065 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 
2026-02-02 03:03:49.553072 | orchestrator | Monday 02 February 2026 03:03:48 +0000 (0:00:03.484) 0:00:55.027 ******* 2026-02-02 03:03:49.553079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-02 03:03:49.553114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-02 03:03:49.553121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-02 03:03:49.553127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-02 03:03:49.553133 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:03:49.553140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-02 03:03:49.553150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-02 03:03:49.553161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-02 03:03:49.553167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-02 03:03:49.553172 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:03:49.553184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-02 03:03:58.093205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-02 03:03:58.093269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  
2026-02-02 03:03:58.093277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-02 03:03:58.093301 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:03:58.093309 | orchestrator | 2026-02-02 03:03:58.093316 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-02-02 03:03:58.093324 | orchestrator | Monday 02 February 2026 03:03:49 +0000 (0:00:00.645) 0:00:55.672 ******* 2026-02-02 03:03:58.093331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-02 03:03:58.093339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-02 03:03:58.093346 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:03:58.093362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-02 03:03:58.093369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-02 03:03:58.093409 | 
orchestrator | skipping: [testbed-node-1] 2026-02-02 03:03:58.093416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-02 03:03:58.093423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-02 03:03:58.093429 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:03:58.093436 | orchestrator | 2026-02-02 03:03:58.093442 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-02 03:03:58.093448 | orchestrator | Monday 02 February 2026 03:03:50 +0000 (0:00:01.330) 0:00:57.002 ******* 2026-02-02 03:03:58.093454 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:03:58.093461 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:03:58.093467 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:03:58.093473 | orchestrator | 2026-02-02 03:03:58.093480 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-02 03:03:58.093487 | orchestrator | Monday 02 February 2026 03:03:52 +0000 (0:00:01.301) 0:00:58.304 ******* 2026-02-02 03:03:58.093493 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:03:58.093500 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:03:58.093506 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:03:58.093512 | orchestrator | 2026-02-02 03:03:58.093519 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-02 03:03:58.093525 | orchestrator | Monday 02 February 2026 03:03:54 +0000 (0:00:02.104) 0:01:00.408 ******* 2026-02-02 03:03:58.093531 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:03:58.093538 | 
orchestrator | 2026-02-02 03:03:58.093555 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-02 03:03:58.093562 | orchestrator | Monday 02 February 2026 03:03:54 +0000 (0:00:00.647) 0:01:01.056 ******* 2026-02-02 03:03:58.093570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-02 03:03:58.093587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-02 03:03:58.093594 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-02 03:03:58.093601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-02 03:03:58.093608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-02 03:03:58.093619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-02 03:03:58.619261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-02 03:03:58.619441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-02 03:03:58.619474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-02 03:03:58.619494 | orchestrator | 2026-02-02 03:03:58.619506 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-02 03:03:58.619517 | orchestrator | Monday 02 February 2026 03:03:58 +0000 (0:00:03.149) 0:01:04.206 ******* 2026-02-02 03:03:58.619529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-02 03:03:58.619551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-02 03:03:58.619598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-02 03:03:58.619609 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:03:58.619626 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-02 03:03:58.619637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-02 03:03:58.619647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-02 03:03:58.619657 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:03:58.619668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-02 03:03:58.619691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-02 03:04:07.868183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-02 03:04:07.868271 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:04:07.868282 | orchestrator | 2026-02-02 03:04:07.868288 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-02 03:04:07.868297 | orchestrator | Monday 02 February 2026 03:03:58 +0000 (0:00:00.523) 0:01:04.730 ******* 2026-02-02 03:04:07.868318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-02 03:04:07.868326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-02 03:04:07.868334 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:04:07.868339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-02 03:04:07.868345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-02 03:04:07.868351 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:04:07.868411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-02 03:04:07.868421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-02 03:04:07.868430 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:04:07.868440 | orchestrator | 2026-02-02 03:04:07.868446 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-02 03:04:07.868452 | orchestrator | Monday 02 February 2026 03:03:59 +0000 (0:00:00.727) 0:01:05.457 ******* 2026-02-02 03:04:07.868457 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:04:07.868464 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:04:07.868469 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:04:07.868475 | orchestrator | 2026-02-02 03:04:07.868480 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-02 03:04:07.868487 | orchestrator | Monday 02 February 2026 03:04:00 +0000 (0:00:01.426) 0:01:06.883 ******* 2026-02-02 03:04:07.868515 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:04:07.868521 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:04:07.868527 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:04:07.868532 | orchestrator | 2026-02-02 03:04:07.868537 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-02 03:04:07.868543 | orchestrator | 
Monday 02 February 2026 03:04:02 +0000 (0:00:01.932) 0:01:08.816 ******* 2026-02-02 03:04:07.868548 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:04:07.868554 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:04:07.868559 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:04:07.868565 | orchestrator | 2026-02-02 03:04:07.868570 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-02 03:04:07.868576 | orchestrator | Monday 02 February 2026 03:04:03 +0000 (0:00:00.335) 0:01:09.152 ******* 2026-02-02 03:04:07.868581 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:04:07.868587 | orchestrator | 2026-02-02 03:04:07.868592 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-02 03:04:07.868598 | orchestrator | Monday 02 February 2026 03:04:03 +0000 (0:00:00.683) 0:01:09.836 ******* 2026-02-02 03:04:07.868618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-02 03:04:07.868630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-02 03:04:07.868636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-02 03:04:07.868642 | orchestrator | 2026-02-02 03:04:07.868647 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-02 03:04:07.868654 | orchestrator | Monday 02 February 2026 03:04:06 +0000 (0:00:02.780) 0:01:12.617 ******* 2026-02-02 03:04:07.868665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 
'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-02 03:04:07.868670 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:04:07.868676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-02 03:04:07.868682 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:04:07.868692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-02 03:04:15.632013 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:04:15.632117 | orchestrator | 2026-02-02 03:04:15.632131 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-02 03:04:15.632143 | orchestrator | Monday 02 February 2026 03:04:07 +0000 (0:00:01.363) 0:01:13.981 ******* 2026-02-02 03:04:15.632174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-02 03:04:15.632187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-02 03:04:15.632198 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:04:15.632207 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-02 03:04:15.632239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-02 03:04:15.632247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-02 03:04:15.632255 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:04:15.632263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-02 03:04:15.632272 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:04:15.632280 | orchestrator | 2026-02-02 03:04:15.632288 
| orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-02 03:04:15.632297 | orchestrator | Monday 02 February 2026 03:04:09 +0000 (0:00:01.669) 0:01:15.650 ******* 2026-02-02 03:04:15.632305 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:04:15.632313 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:04:15.632320 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:04:15.632329 | orchestrator | 2026-02-02 03:04:15.632416 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-02 03:04:15.632429 | orchestrator | Monday 02 February 2026 03:04:09 +0000 (0:00:00.467) 0:01:16.117 ******* 2026-02-02 03:04:15.632438 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:04:15.632446 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:04:15.632455 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:04:15.632463 | orchestrator | 2026-02-02 03:04:15.632471 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-02 03:04:15.632479 | orchestrator | Monday 02 February 2026 03:04:11 +0000 (0:00:01.296) 0:01:17.414 ******* 2026-02-02 03:04:15.632487 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:04:15.632496 | orchestrator | 2026-02-02 03:04:15.632505 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-02 03:04:15.632514 | orchestrator | Monday 02 February 2026 03:04:12 +0000 (0:00:00.970) 0:01:18.384 ******* 2026-02-02 03:04:15.632543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-02 03:04:15.632563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-02 03:04:15.632570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 03:04:15.632577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 03:04:15.632584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 03:04:15.632596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 03:04:16.405197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 03:04:16.405324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 03:04:16.405392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 
'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-02 03:04:16.405405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 03:04:16.405416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-02 03:04:16.405454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-02 03:04:16.405473 | orchestrator |
2026-02-02 03:04:16.405486 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-02-02 03:04:16.405498 | orchestrator | Monday 02 February 2026 03:04:15 +0000 (0:00:03.458) 0:01:21.843 *******
2026-02-02 03:04:16.405510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-02 03:04:16.405521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-02 03:04:16.405531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-02 03:04:16.405541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-02 03:04:16.405552 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:04:16.405576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-02 03:04:22.965930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-02 03:04:22.966108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-02 03:04:22.966134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-02 03:04:22.966151 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:04:22.966169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-02 03:04:22.966185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-02 03:04:22.966266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-02 03:04:22.966283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-02 03:04:22.966299 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:04:22.966313 | orchestrator |
2026-02-02 03:04:22.966349 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-02-02 03:04:22.966365 | orchestrator | Monday 02 February 2026 03:04:16 +0000 (0:00:00.812) 0:01:22.655 *******
2026-02-02 03:04:22.966378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-02 03:04:22.966394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-02 03:04:22.966410 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:04:22.966424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-02 03:04:22.966438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-02 03:04:22.966453 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:04:22.966468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-02 03:04:22.966483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-02 03:04:22.966498 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:04:22.966512 | orchestrator |
2026-02-02 03:04:22.966527 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-02-02 03:04:22.966542 | orchestrator | Monday 02 February 2026 03:04:17 +0000 (0:00:01.443) 0:01:24.099 *******
2026-02-02 03:04:22.966557 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:04:22.966581 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:04:22.966595 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:04:22.966609 | orchestrator |
2026-02-02 03:04:22.966623 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-02-02 03:04:22.966637 | orchestrator | Monday 02 February 2026 03:04:19 +0000 (0:00:01.307) 0:01:25.406 *******
2026-02-02 03:04:22.966651 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:04:22.966666 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:04:22.966680 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:04:22.966693 | orchestrator |
2026-02-02 03:04:22.966707 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-02-02 03:04:22.966722 | orchestrator | Monday 02 February 2026 03:04:21 +0000
(0:00:01.964) 0:01:27.371 *******
2026-02-02 03:04:22.966736 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:04:22.966749 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:04:22.966763 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:04:22.966777 | orchestrator |
2026-02-02 03:04:22.966791 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-02-02 03:04:22.966805 | orchestrator | Monday 02 February 2026 03:04:21 +0000 (0:00:00.318) 0:01:27.689 *******
2026-02-02 03:04:22.966819 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:04:22.966833 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:04:22.966847 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:04:22.966861 | orchestrator |
2026-02-02 03:04:22.966875 | orchestrator | TASK [include_role : designate] ************************************************
2026-02-02 03:04:22.966889 | orchestrator | Monday 02 February 2026 03:04:21 +0000 (0:00:00.313) 0:01:28.003 *******
2026-02-02 03:04:22.966903 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 03:04:22.966917 | orchestrator |
2026-02-02 03:04:22.966931 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-02-02 03:04:22.966953 | orchestrator | Monday 02 February 2026 03:04:22 +0000 (0:00:01.076) 0:01:29.080 *******
2026-02-02 03:04:26.487161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-02 03:04:26.487232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-02 03:04:26.487239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-02 03:04:26.487257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-02 03:04:26.487263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-02 03:04:26.487286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes',
'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-02 03:04:26.487292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-02 03:04:26.487299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-02 03:04:26.487305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-02 03:04:26.487361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-02 03:04:26.487369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-02 03:04:26.487385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-02 03:04:27.337178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-02 03:04:27.337300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-02 03:04:27.337394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-02 03:04:27.337447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-02 03:04:27.337460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-02 03:04:27.337471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-02 03:04:27.337518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-02 03:04:27.337531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-02 03:04:27.337542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-02 03:04:27.337560 | orchestrator |
2026-02-02 03:04:27.337574 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-02-02 03:04:27.337587 | orchestrator | Monday 02 February 2026 03:04:26 +0000 (0:00:03.707) 0:01:32.787 *******
2026-02-02 03:04:27.337598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-02 03:04:27.337611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-02 03:04:27.337622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value':
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-02 03:04:27.337640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-02 03:04:27.789728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-02 03:04:27.789809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-02 03:04:27.789836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-02 03:04:27.789845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-02 03:04:27.789853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-02 03:04:27.789861 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:04:27.790280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-02 03:04:27.790331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 03:04:27.790341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 03:04:27.790359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-02 03:04:27.790370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-02 
03:04:27.790378 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:04:27.790387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-02 03:04:27.790395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-02 03:04:27.790409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 03:04:37.715171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 03:04:37.715317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 03:04:37.715355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-02 03:04:37.715368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-02 03:04:37.715379 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:04:37.715392 | orchestrator |
2026-02-02 03:04:37.715403 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-02-02 03:04:37.715415 | orchestrator | Monday 02 February 2026 03:04:27 +0000 (0:00:01.119) 0:01:33.907 *******
2026-02-02 03:04:37.715425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-02-02 03:04:37.715437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-02-02 03:04:37.715448 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:04:37.715458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-02-02 03:04:37.715468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-02-02 03:04:37.715478 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:04:37.715487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-02-02 03:04:37.715519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-02-02 03:04:37.715530 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:04:37.715547 | orchestrator |
2026-02-02 03:04:37.715563 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-02-02 03:04:37.715601 | orchestrator | Monday 02 February 2026 03:04:29 +0000 (0:00:01.327) 0:01:35.234 *******
2026-02-02 03:04:37.715619 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:04:37.715635 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:04:37.715649 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:04:37.715666 | orchestrator |
2026-02-02 03:04:37.715683 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-02-02 03:04:37.715701 | orchestrator | Monday 02 February 2026 03:04:30 +0000 (0:00:01.254) 0:01:36.489 *******
2026-02-02 03:04:37.715719 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:04:37.715736 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:04:37.715752 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:04:37.715764 | orchestrator |
2026-02-02 03:04:37.715777 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-02-02 03:04:37.715789 | orchestrator | Monday 02 February 2026 03:04:32 +0000 (0:00:02.033) 0:01:38.522 *******
2026-02-02 03:04:37.715800 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:04:37.715813 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:04:37.715830 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:04:37.715846 | orchestrator |
2026-02-02 03:04:37.715862 | orchestrator | TASK [include_role : glance] ***************************************************
2026-02-02 03:04:37.715878 | orchestrator | Monday 02 February 2026 03:04:32 +0000 (0:00:00.332) 0:01:38.855 *******
2026-02-02 03:04:37.715893 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 03:04:37.715908 | orchestrator |
2026-02-02 03:04:37.715923 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-02-02 03:04:37.715940 | orchestrator | Monday 02 February 2026 03:04:33 +0000 (0:00:01.036) 0:01:39.891 *******
2026-02-02 03:04:37.715967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-02 03:04:37.716000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-02-02 03:04:40.781040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-02 03:04:40.781137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-02-02 03:04:40.781192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-02 03:04:40.781201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt',
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-02-02 03:04:40.781215 | orchestrator |
2026-02-02 03:04:40.781223 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2026-02-02 03:04:40.781232 | orchestrator | Monday 02 February 2026 03:04:37 +0000 (0:00:04.057) 0:01:43.949 *******
2026-02-02 03:04:40.781251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-02 03:04:40.880649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-02-02 03:04:40.880753 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:04:40.880765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-02 03:04:40.880795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-02-02 03:04:40.880808 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:04:40.880814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-02 03:04:40.880827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-02-02 03:04:52.989644 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:04:52.989745 | orchestrator |
2026-02-02 03:04:52.989756 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************
2026-02-02 03:04:52.989765 | orchestrator | Monday 02 February 2026 03:04:40 +0000 (0:00:03.049) 0:01:46.999 *******
2026-02-02 03:04:52.989774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-02-02 03:04:52.989785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-02-02 03:04:52.989793 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:04:52.989800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-02-02 03:04:52.989807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-02 03:04:52.989814 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:04:52.989821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-02 03:04:52.989843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-02 03:04:52.989850 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:04:52.989858 | orchestrator | 2026-02-02 03:04:52.989865 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-02 03:04:52.989872 | orchestrator | Monday 02 February 2026 03:04:44 +0000 (0:00:03.835) 0:01:50.834 ******* 2026-02-02 03:04:52.989895 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:04:52.989903 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:04:52.989909 | orchestrator | changed: 
[testbed-node-2] 2026-02-02 03:04:52.989916 | orchestrator | 2026-02-02 03:04:52.989923 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-02 03:04:52.989929 | orchestrator | Monday 02 February 2026 03:04:46 +0000 (0:00:01.353) 0:01:52.187 ******* 2026-02-02 03:04:52.989936 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:04:52.989943 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:04:52.989949 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:04:52.989956 | orchestrator | 2026-02-02 03:04:52.989963 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-02 03:04:52.989982 | orchestrator | Monday 02 February 2026 03:04:48 +0000 (0:00:02.077) 0:01:54.265 ******* 2026-02-02 03:04:52.989989 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:04:52.989996 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:04:52.990002 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:04:52.990009 | orchestrator | 2026-02-02 03:04:52.990062 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-02 03:04:52.990072 | orchestrator | Monday 02 February 2026 03:04:48 +0000 (0:00:00.301) 0:01:54.566 ******* 2026-02-02 03:04:52.990080 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:04:52.990111 | orchestrator | 2026-02-02 03:04:52.990120 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-02 03:04:52.990127 | orchestrator | Monday 02 February 2026 03:04:49 +0000 (0:00:01.094) 0:01:55.661 ******* 2026-02-02 03:04:52.990135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-02 03:04:52.990146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-02 03:04:52.990154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-02 03:04:52.990162 | orchestrator | 2026-02-02 03:04:52.990169 | orchestrator | TASK 
[haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-02 03:04:52.990186 | orchestrator | Monday 02 February 2026 03:04:52 +0000 (0:00:03.039) 0:01:58.701 ******* 2026-02-02 03:04:52.990195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-02 03:04:52.990205 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:04:52.990221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-02 03:05:01.973904 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:05:01.974106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-02 03:05:01.974262 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:05:01.974293 | orchestrator | 2026-02-02 03:05:01.974308 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-02 03:05:01.974319 | orchestrator | Monday 02 February 2026 03:04:52 +0000 (0:00:00.405) 0:01:59.106 ******* 2026-02-02 03:05:01.974328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-02 03:05:01.974338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-02 03:05:01.974348 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:05:01.974356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-02 03:05:01.974365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-02 03:05:01.974379 | orchestrator | skipping: [testbed-node-1] 2026-02-02 
03:05:01.974399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-02 03:05:01.974414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-02 03:05:01.974451 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:05:01.974464 | orchestrator | 2026-02-02 03:05:01.974477 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-02 03:05:01.974490 | orchestrator | Monday 02 February 2026 03:04:53 +0000 (0:00:00.872) 0:01:59.978 ******* 2026-02-02 03:05:01.974503 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:05:01.974515 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:05:01.974529 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:05:01.974542 | orchestrator | 2026-02-02 03:05:01.974556 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-02 03:05:01.974570 | orchestrator | Monday 02 February 2026 03:04:55 +0000 (0:00:01.314) 0:02:01.293 ******* 2026-02-02 03:05:01.974584 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:05:01.974597 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:05:01.974611 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:05:01.974624 | orchestrator | 2026-02-02 03:05:01.974638 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-02 03:05:01.974660 | orchestrator | Monday 02 February 2026 03:04:57 +0000 (0:00:02.046) 0:02:03.339 ******* 2026-02-02 03:05:01.974675 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:05:01.974689 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:05:01.974703 | orchestrator | 
skipping: [testbed-node-2] 2026-02-02 03:05:01.974712 | orchestrator | 2026-02-02 03:05:01.974720 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-02 03:05:01.974728 | orchestrator | Monday 02 February 2026 03:04:57 +0000 (0:00:00.314) 0:02:03.653 ******* 2026-02-02 03:05:01.974736 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:05:01.974744 | orchestrator | 2026-02-02 03:05:01.974752 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-02 03:05:01.974760 | orchestrator | Monday 02 February 2026 03:04:58 +0000 (0:00:01.192) 0:02:04.845 ******* 2026-02-02 03:05:01.974792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-02 03:05:01.974818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-02 03:05:01.974837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-02 03:05:03.663517 | orchestrator | 2026-02-02 03:05:03.663608 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-02 03:05:03.663619 | orchestrator | Monday 02 February 2026 03:05:01 +0000 (0:00:03.244) 0:02:08.090 ******* 2026-02-02 03:05:03.663646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 
'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-02 03:05:03.663671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 
'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': 
True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-02 03:05:03.663695 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:05:03.663703 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:05:03.663714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-02 03:05:03.663722 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:05:03.663728 | orchestrator | 2026-02-02 03:05:03.663734 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-02 03:05:03.663741 | orchestrator | Monday 02 February 2026 03:05:02 +0000 (0:00:00.680) 0:02:08.771 ******* 2026-02-02 03:05:03.663749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-02 03:05:03.663763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-02 03:05:03.663773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-02 03:05:03.663784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-02 03:05:12.750454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-02 03:05:12.750537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-02 03:05:12.750547 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:05:12.750555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-02 03:05:12.750575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-02 03:05:12.750582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}})  2026-02-02 03:05:12.750588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-02 03:05:12.750593 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:05:12.750598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-02 03:05:12.750603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-02 03:05:12.750608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-02 03:05:12.750628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-02 03:05:12.750633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-02 03:05:12.750638 | 
orchestrator | skipping: [testbed-node-2] 2026-02-02 03:05:12.750643 | orchestrator | 2026-02-02 03:05:12.750649 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-02 03:05:12.750656 | orchestrator | Monday 02 February 2026 03:05:03 +0000 (0:00:01.009) 0:02:09.781 ******* 2026-02-02 03:05:12.750661 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:05:12.750665 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:05:12.750670 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:05:12.750675 | orchestrator | 2026-02-02 03:05:12.750680 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-02 03:05:12.750685 | orchestrator | Monday 02 February 2026 03:05:05 +0000 (0:00:01.670) 0:02:11.452 ******* 2026-02-02 03:05:12.750690 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:05:12.750695 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:05:12.750700 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:05:12.750705 | orchestrator | 2026-02-02 03:05:12.750710 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-02 03:05:12.750715 | orchestrator | Monday 02 February 2026 03:05:07 +0000 (0:00:02.188) 0:02:13.640 ******* 2026-02-02 03:05:12.750720 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:05:12.750725 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:05:12.750739 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:05:12.750745 | orchestrator | 2026-02-02 03:05:12.750750 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-02 03:05:12.750755 | orchestrator | Monday 02 February 2026 03:05:07 +0000 (0:00:00.348) 0:02:13.989 ******* 2026-02-02 03:05:12.750760 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:05:12.750764 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:05:12.750769 | 
orchestrator | skipping: [testbed-node-2] 2026-02-02 03:05:12.750774 | orchestrator | 2026-02-02 03:05:12.750779 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-02 03:05:12.750784 | orchestrator | Monday 02 February 2026 03:05:08 +0000 (0:00:00.339) 0:02:14.329 ******* 2026-02-02 03:05:12.750789 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:05:12.750794 | orchestrator | 2026-02-02 03:05:12.750799 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-02 03:05:12.750803 | orchestrator | Monday 02 February 2026 03:05:09 +0000 (0:00:01.216) 0:02:15.546 ******* 2026-02-02 03:05:12.750815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-02 03:05:12.750827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-02 03:05:12.750834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-02 03:05:12.750839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-02 03:05:12.750849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-02 03:05:13.377366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-02 03:05:13.377468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-02 03:05:13.377504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-02 03:05:13.377516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-02 03:05:13.377527 | orchestrator | 2026-02-02 03:05:13.377538 | orchestrator | TASK 
[haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-02 03:05:13.377550 | orchestrator | Monday 02 February 2026 03:05:12 +0000 (0:00:03.322) 0:02:18.868 ******* 2026-02-02 03:05:13.377579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-02 03:05:13.377597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-02 03:05:13.377608 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-02 03:05:13.377626 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:05:13.377638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-02 03:05:13.377649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-02 03:05:13.377659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-02 03:05:13.377669 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:05:13.377691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-02 03:05:22.766106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-02 03:05:22.766270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-02 03:05:22.766291 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:05:22.766302 | orchestrator | 2026-02-02 03:05:22.766311 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-02-02 03:05:22.766321 | orchestrator | Monday 02 February 2026 03:05:13 +0000 (0:00:00.624) 0:02:19.492 ******* 2026-02-02 03:05:22.766331 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-02 03:05:22.766341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-02 03:05:22.766351 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:05:22.766359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-02 03:05:22.766367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-02 03:05:22.766375 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:05:22.766383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-02 03:05:22.766390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-02 03:05:22.766398 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:05:22.766406 | 
orchestrator | 2026-02-02 03:05:22.766414 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-02 03:05:22.766422 | orchestrator | Monday 02 February 2026 03:05:14 +0000 (0:00:01.098) 0:02:20.590 ******* 2026-02-02 03:05:22.766429 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:05:22.766437 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:05:22.766471 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:05:22.766479 | orchestrator | 2026-02-02 03:05:22.766487 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-02 03:05:22.766495 | orchestrator | Monday 02 February 2026 03:05:15 +0000 (0:00:01.354) 0:02:21.945 ******* 2026-02-02 03:05:22.766503 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:05:22.766510 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:05:22.766518 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:05:22.766525 | orchestrator | 2026-02-02 03:05:22.766533 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-02 03:05:22.766541 | orchestrator | Monday 02 February 2026 03:05:17 +0000 (0:00:02.147) 0:02:24.092 ******* 2026-02-02 03:05:22.766549 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:05:22.766570 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:05:22.766579 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:05:22.766586 | orchestrator | 2026-02-02 03:05:22.766594 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-02 03:05:22.766616 | orchestrator | Monday 02 February 2026 03:05:18 +0000 (0:00:00.323) 0:02:24.416 ******* 2026-02-02 03:05:22.766621 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:05:22.766626 | orchestrator | 2026-02-02 03:05:22.766631 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy 
config] ********************* 2026-02-02 03:05:22.766636 | orchestrator | Monday 02 February 2026 03:05:19 +0000 (0:00:01.270) 0:02:25.687 ******* 2026-02-02 03:05:22.766642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-02 03:05:22.766650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 03:05:22.766656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-02 03:05:22.766667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 03:05:22.766679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-02 03:05:28.125818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 03:05:28.125961 | orchestrator | 2026-02-02 03:05:28.125989 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-02 03:05:28.126009 | orchestrator | Monday 02 February 2026 03:05:22 +0000 (0:00:03.189) 0:02:28.876 ******* 2026-02-02 03:05:28.126105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-02 03:05:28.126179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 03:05:28.126250 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:05:28.126276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-02 03:05:28.126319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 03:05:28.126339 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:05:28.126359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-02 03:05:28.126374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 03:05:28.126398 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:05:28.126414 | orchestrator | 2026-02-02 03:05:28.126432 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-02-02 03:05:28.126451 | orchestrator | Monday 02 February 2026 03:05:23 +0000 (0:00:00.738) 0:02:29.615 ******* 2026-02-02 03:05:28.126470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-02 03:05:28.126486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-02 03:05:28.126505 | orchestrator | skipping: 
[testbed-node-0] 2026-02-02 03:05:28.126523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-02 03:05:28.126540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-02 03:05:28.126557 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:05:28.126567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-02 03:05:28.126577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-02 03:05:28.126591 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:05:28.126608 | orchestrator | 2026-02-02 03:05:28.126632 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-02-02 03:05:28.126650 | orchestrator | Monday 02 February 2026 03:05:24 +0000 (0:00:00.920) 0:02:30.536 ******* 2026-02-02 03:05:28.126660 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:05:28.126671 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:05:28.126688 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:05:28.126706 | orchestrator | 2026-02-02 03:05:28.126722 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-02-02 03:05:28.126736 | orchestrator | Monday 02 February 2026 03:05:26 +0000 (0:00:01.636) 0:02:32.173 ******* 2026-02-02 03:05:28.126745 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:05:28.126755 | orchestrator | changed: 
[testbed-node-1] 2026-02-02 03:05:28.126773 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:05:28.126790 | orchestrator | 2026-02-02 03:05:28.126806 | orchestrator | TASK [include_role : manila] *************************************************** 2026-02-02 03:05:28.126831 | orchestrator | Monday 02 February 2026 03:05:28 +0000 (0:00:02.066) 0:02:34.239 ******* 2026-02-02 03:05:32.649157 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:05:32.649333 | orchestrator | 2026-02-02 03:05:32.649350 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-02-02 03:05:32.649362 | orchestrator | Monday 02 February 2026 03:05:29 +0000 (0:00:01.054) 0:02:35.293 ******* 2026-02-02 03:05:32.649375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-02 03:05:32.649415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 03:05:32.649428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 03:05:32.649439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-02 03:05:32.649477 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-02 03:05:32.649521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 03:05:32.649540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 03:05:32.649568 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-02 03:05:32.649584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-02 03:05:32.649602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 03:05:32.649628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 03:05:32.649657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-02 03:05:33.631663 | orchestrator | 2026-02-02 03:05:33.631777 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-02 03:05:33.631793 | orchestrator | Monday 02 February 2026 03:05:32 +0000 (0:00:03.563) 0:02:38.857 ******* 2026-02-02 03:05:33.631824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-02 03:05:33.632788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 03:05:33.632880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 03:05:33.632897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-02 03:05:33.632911 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:05:33.632955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-02 03:05:33.633015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 03:05:33.633057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 03:05:33.633079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-02 03:05:33.633099 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:05:33.633116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-02 03:05:33.633132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 03:05:33.633159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 03:05:33.633292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-02 03:05:44.909978 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:05:44.910144 | orchestrator | 2026-02-02 03:05:44.910188 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-02-02 03:05:44.910203 | orchestrator | Monday 02 February 2026 03:05:33 +0000 (0:00:00.994) 0:02:39.851 ******* 2026-02-02 03:05:44.910216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-02 03:05:44.910229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-02 03:05:44.910243 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:05:44.910255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-02 03:05:44.910267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-02 03:05:44.910278 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:05:44.910290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-02 03:05:44.910301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-02 03:05:44.910312 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:05:44.910323 | orchestrator | 2026-02-02 03:05:44.910334 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-02-02 03:05:44.910345 | orchestrator | Monday 02 February 2026 03:05:34 +0000 (0:00:01.026) 0:02:40.877 ******* 2026-02-02 03:05:44.910356 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:05:44.910367 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:05:44.910378 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:05:44.910389 | orchestrator | 2026-02-02 03:05:44.910400 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-02-02 03:05:44.910411 | orchestrator | Monday 02 February 2026 03:05:35 +0000 (0:00:01.240) 0:02:42.118 ******* 2026-02-02 03:05:44.910422 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:05:44.910434 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:05:44.910445 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:05:44.910456 | orchestrator | 2026-02-02 03:05:44.910466 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-02-02 03:05:44.910478 | orchestrator | Monday 02 February 2026 03:05:38 +0000 (0:00:02.109) 0:02:44.227 ******* 2026-02-02 03:05:44.910489 | 
orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:05:44.910511 | orchestrator | 2026-02-02 03:05:44.910526 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-02-02 03:05:44.910539 | orchestrator | Monday 02 February 2026 03:05:39 +0000 (0:00:01.391) 0:02:45.619 ******* 2026-02-02 03:05:44.910553 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-02 03:05:44.910567 | orchestrator | 2026-02-02 03:05:44.910580 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-02-02 03:05:44.910618 | orchestrator | Monday 02 February 2026 03:05:42 +0000 (0:00:03.003) 0:02:48.622 ******* 2026-02-02 03:05:44.910670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 03:05:44.910690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-02 03:05:44.910705 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:05:44.910725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 03:05:44.910749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-02 03:05:44.910762 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:05:44.910785 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 03:05:47.493477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 
'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-02 03:05:47.493602 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:05:47.493620 | orchestrator | 2026-02-02 03:05:47.493633 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-02 03:05:47.493646 | orchestrator | Monday 02 February 2026 03:05:44 +0000 (0:00:02.392) 0:02:51.014 ******* 2026-02-02 03:05:47.493702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 03:05:47.493718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-02 03:05:47.493730 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:05:47.493762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 03:05:47.493792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2026-02-02 03:05:47.493804 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:05:47.493817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 03:05:47.493836 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-02 03:05:57.469450 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:05:57.469555 | orchestrator | 2026-02-02 03:05:57.469571 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-02 03:05:57.469584 | orchestrator | Monday 02 February 2026 03:05:47 +0000 (0:00:02.592) 0:02:53.607 ******* 2026-02-02 03:05:57.469596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-02 03:05:57.469639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 
rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-02 03:05:57.469677 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:05:57.469696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-02 03:05:57.469715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-02 03:05:57.469732 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:05:57.469749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-02 03:05:57.469766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-02 03:05:57.469793 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:05:57.469810 | orchestrator | 2026-02-02 03:05:57.469826 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-02 03:05:57.469850 | orchestrator | Monday 02 February 2026 03:05:50 +0000 (0:00:02.986) 0:02:56.594 ******* 2026-02-02 03:05:57.469867 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:05:57.469922 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:05:57.469934 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:05:57.469944 | orchestrator | 2026-02-02 03:05:57.469954 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-02-02 03:05:57.469966 | orchestrator | Monday 02 February 2026 03:05:52 +0000 (0:00:02.025) 0:02:58.619 ******* 2026-02-02 03:05:57.469978 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:05:57.469994 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:05:57.470087 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:05:57.470112 | orchestrator | 2026-02-02 03:05:57.470131 | 
orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-02 03:05:57.470176 | orchestrator | Monday 02 February 2026 03:05:54 +0000 (0:00:01.527) 0:03:00.147 ******* 2026-02-02 03:05:57.470193 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:05:57.470206 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:05:57.470218 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:05:57.470229 | orchestrator | 2026-02-02 03:05:57.470239 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-02 03:05:57.470248 | orchestrator | Monday 02 February 2026 03:05:54 +0000 (0:00:00.363) 0:03:00.510 ******* 2026-02-02 03:05:57.470258 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:05:57.470268 | orchestrator | 2026-02-02 03:05:57.470278 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-02 03:05:57.470288 | orchestrator | Monday 02 February 2026 03:05:55 +0000 (0:00:01.387) 0:03:01.898 ******* 2026-02-02 03:05:57.470308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-02 03:05:57.470323 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-02 03:05:57.470334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-02 03:05:57.470345 | orchestrator | 2026-02-02 03:05:57.470354 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-02 03:05:57.470374 | orchestrator | Monday 02 February 2026 03:05:57 +0000 (0:00:01.487) 0:03:03.386 ******* 2026-02-02 03:05:57.470404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-02 03:06:05.934276 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:06:05.934362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-02 03:06:05.934372 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:06:05.934378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-02 03:06:05.934383 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:06:05.934387 | orchestrator | 2026-02-02 03:06:05.934393 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-02 03:06:05.934399 | orchestrator | Monday 02 February 2026 03:05:57 +0000 (0:00:00.389) 0:03:03.776 ******* 2026-02-02 03:06:05.934405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-02 03:06:05.934411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-02 03:06:05.934416 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:06:05.934420 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:06:05.934425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-02 03:06:05.934446 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:06:05.934450 | orchestrator | 2026-02-02 03:06:05.934483 | 
orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-02 03:06:05.934488 | orchestrator | Monday 02 February 2026 03:05:58 +0000 (0:00:00.864) 0:03:04.641 ******* 2026-02-02 03:06:05.934493 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:06:05.934497 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:06:05.934501 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:06:05.934506 | orchestrator | 2026-02-02 03:06:05.934510 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-02 03:06:05.934515 | orchestrator | Monday 02 February 2026 03:05:58 +0000 (0:00:00.447) 0:03:05.088 ******* 2026-02-02 03:06:05.934519 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:06:05.934523 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:06:05.934528 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:06:05.934532 | orchestrator | 2026-02-02 03:06:05.934536 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-02 03:06:05.934541 | orchestrator | Monday 02 February 2026 03:06:00 +0000 (0:00:01.328) 0:03:06.417 ******* 2026-02-02 03:06:05.934545 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:06:05.934550 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:06:05.934554 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:06:05.934558 | orchestrator | 2026-02-02 03:06:05.934563 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-02-02 03:06:05.934567 | orchestrator | Monday 02 February 2026 03:06:00 +0000 (0:00:00.337) 0:03:06.754 ******* 2026-02-02 03:06:05.934572 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:06:05.934576 | orchestrator | 2026-02-02 03:06:05.934581 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 
2026-02-02 03:06:05.934585 | orchestrator | Monday 02 February 2026 03:06:02 +0000 (0:00:01.469) 0:03:08.223 ******* 2026-02-02 03:06:05.934601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-02 03:06:05.934610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:05.934616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:05.934627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:05.934632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-02 03:06:05.934642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:06.008029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-02 03:06:06.008103 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-02 03:06:06.008144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:06.008152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-02 03:06:06.008157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:06.008173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-02 03:06:06.008182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:06.008187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:06.008196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-02 03:06:06.008201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:06:06.008205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:06.008213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:06.090080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:06.090251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:06.090267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 
'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-02 03:06:06.090275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-02 03:06:06.090283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:06.090290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-02 03:06:06.090316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-02 03:06:06.090329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-02 03:06:06.090342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:06.090349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:06.090358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:06.090363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-02 03:06:06.090378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:06:06.288956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  
2026-02-02 03:06:06.289067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:06:06.289085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:06.289099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-02 03:06:06.289114 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-02 03:06:06.289153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:06.289215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-02 03:06:06.289249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:06:06.289262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:06.289276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 
'timeout': '30'}}})  2026-02-02 03:06:06.289289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-02 03:06:06.289303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-02 03:06:06.289336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:06:07.433796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-02 03:06:07.433904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:07.433922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-02 03:06:07.433938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:06:07.433950 | orchestrator | 2026-02-02 03:06:07.433964 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-02 03:06:07.434073 | orchestrator | Monday 02 February 2026 03:06:06 +0000 (0:00:04.180) 0:03:12.404 ******* 2026-02-02 03:06:07.434107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-02 03:06:07.434174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:07.434189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:07.434202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:07.434213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-02 03:06:07.434240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:07.434254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-02 03:06:07.434276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-02 03:06:07.504394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:07.504464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:06:07.504474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-02 03:06:07.504499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:07.504517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:07.504535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-02 03:06:07.504542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:07.504549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-02 03:06:07.504554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:07.504567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:07.504576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-02 03:06:07.504587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': 
{'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-02 03:06:07.600315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:07.600386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-02 03:06:07.600408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:06:07.600424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-02 03:06:07.600431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 
'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:07.600436 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:06:07.600441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-02 03:06:07.600458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:07.600463 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:07.600471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:07.600476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:06:07.600480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-02 03:06:07.600489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:07.813404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:07.813526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-02 03:06:07.813543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-02 03:06:07.813556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-02 03:06:07.813571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-02 03:06:07.813582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:07.813592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:07.813625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-02 03:06:07.813644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 
 2026-02-02 03:06:07.813658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:06:07.813669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:07.813679 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:06:07.813691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-02 03:06:07.813707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-02 03:06:18.867676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-02 03:06:18.867850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': 
'30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-02 03:06:18.867911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:06:18.867933 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:06:18.867955 | orchestrator | 2026-02-02 03:06:18.867976 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-02 03:06:18.867997 | orchestrator | Monday 02 February 2026 03:06:07 +0000 (0:00:01.524) 0:03:13.929 ******* 2026-02-02 03:06:18.868018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-02 03:06:18.868039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-02 03:06:18.868059 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:06:18.868079 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-02 03:06:18.868098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-02 03:06:18.868146 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:06:18.868166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-02 03:06:18.868186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-02 03:06:18.868221 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:06:18.868240 | orchestrator | 2026-02-02 03:06:18.868259 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-02 03:06:18.868280 | orchestrator | Monday 02 February 2026 03:06:09 +0000 (0:00:02.122) 0:03:16.051 ******* 2026-02-02 03:06:18.868299 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:06:18.868318 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:06:18.868363 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:06:18.868384 | orchestrator | 2026-02-02 03:06:18.868403 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-02 03:06:18.868423 | orchestrator | Monday 02 February 2026 03:06:11 +0000 (0:00:01.349) 0:03:17.400 ******* 2026-02-02 03:06:18.868442 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:06:18.868460 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:06:18.868479 | orchestrator | changed: [testbed-node-2] 
2026-02-02 03:06:18.868497 | orchestrator | 2026-02-02 03:06:18.868516 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-02 03:06:18.868536 | orchestrator | Monday 02 February 2026 03:06:13 +0000 (0:00:02.153) 0:03:19.554 ******* 2026-02-02 03:06:18.868553 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:06:18.868570 | orchestrator | 2026-02-02 03:06:18.868589 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-02 03:06:18.868607 | orchestrator | Monday 02 February 2026 03:06:14 +0000 (0:00:01.297) 0:03:20.851 ******* 2026-02-02 03:06:18.868629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-02 03:06:18.868656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-02 03:06:18.868668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-02 03:06:18.868691 | orchestrator | 2026-02-02 03:06:18.868702 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-02 03:06:18.868714 | orchestrator | Monday 02 February 2026 03:06:18 +0000 (0:00:03.524) 0:03:24.376 ******* 2026-02-02 03:06:18.868740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-02 03:06:29.284663 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:06:29.284902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-02 03:06:29.284944 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:06:29.285000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-02 03:06:29.285027 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:06:29.285051 | orchestrator | 2026-02-02 03:06:29.285076 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-02 03:06:29.285136 | orchestrator | Monday 02 February 2026 03:06:18 +0000 (0:00:00.604) 0:03:24.980 ******* 2026-02-02 03:06:29.285163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-02 03:06:29.285210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-02 03:06:29.285228 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:06:29.285245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}})  2026-02-02 03:06:29.285260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-02 03:06:29.285274 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:06:29.285288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-02 03:06:29.285305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-02 03:06:29.285321 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:06:29.285336 | orchestrator | 2026-02-02 03:06:29.285352 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-02 03:06:29.285368 | orchestrator | Monday 02 February 2026 03:06:19 +0000 (0:00:00.774) 0:03:25.755 ******* 2026-02-02 03:06:29.285384 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:06:29.285400 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:06:29.285414 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:06:29.285424 | orchestrator | 2026-02-02 03:06:29.285433 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-02 03:06:29.285443 | orchestrator | Monday 02 February 2026 03:06:21 +0000 (0:00:01.856) 0:03:27.612 ******* 2026-02-02 03:06:29.285453 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:06:29.285463 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:06:29.285497 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:06:29.285507 | orchestrator | 
2026-02-02 03:06:29.285517 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-02-02 03:06:29.285528 | orchestrator | Monday 02 February 2026 03:06:23 +0000 (0:00:01.777) 0:03:29.389 ******* 2026-02-02 03:06:29.285539 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:06:29.285548 | orchestrator | 2026-02-02 03:06:29.285558 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-02 03:06:29.285568 | orchestrator | Monday 02 February 2026 03:06:24 +0000 (0:00:01.599) 0:03:30.988 ******* 2026-02-02 03:06:29.285580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-02 03:06:29.285614 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 03:06:29.285626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 03:06:29.285645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-02 03:06:30.281160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 03:06:30.281234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 03:06:30.281270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-02 03:06:30.281276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 03:06:30.281281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 03:06:30.281285 | orchestrator | 2026-02-02 03:06:30.281290 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-02 03:06:30.281297 | orchestrator | Monday 02 February 2026 03:06:29 +0000 (0:00:04.405) 0:03:35.394 ******* 2026-02-02 03:06:30.281319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2026-02-02 03:06:30.281330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 03:06:30.281337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 03:06:30.281342 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:06:30.281347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-02 03:06:30.281355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 03:06:41.451539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 03:06:41.451636 | 
orchestrator | skipping: [testbed-node-1] 2026-02-02 03:06:41.451666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-02 03:06:41.451697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 03:06:41.451705 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 03:06:41.451711 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:06:41.451718 | orchestrator | 2026-02-02 03:06:41.451726 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-02 03:06:41.451735 | orchestrator | Monday 02 February 2026 03:06:30 +0000 (0:00:01.007) 0:03:36.401 ******* 2026-02-02 03:06:41.451744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-02 03:06:41.451754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-02 03:06:41.451763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-02 03:06:41.451785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-02 
03:06:41.451793 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:06:41.451800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-02 03:06:41.451807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-02 03:06:41.451819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-02 03:06:41.451825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-02 03:06:41.451832 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:06:41.451839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-02 03:06:41.451845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-02 03:06:41.451856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-02 03:06:41.451862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': 
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-02 03:06:41.451867 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:06:41.451873 | orchestrator | 2026-02-02 03:06:41.451879 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-02 03:06:41.451885 | orchestrator | Monday 02 February 2026 03:06:31 +0000 (0:00:01.257) 0:03:37.659 ******* 2026-02-02 03:06:41.451891 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:06:41.451897 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:06:41.451903 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:06:41.451909 | orchestrator | 2026-02-02 03:06:41.451915 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-02 03:06:41.451921 | orchestrator | Monday 02 February 2026 03:06:32 +0000 (0:00:01.354) 0:03:39.014 ******* 2026-02-02 03:06:41.451926 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:06:41.451932 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:06:41.451937 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:06:41.451943 | orchestrator | 2026-02-02 03:06:41.451949 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-02-02 03:06:41.451954 | orchestrator | Monday 02 February 2026 03:06:34 +0000 (0:00:02.111) 0:03:41.125 ******* 2026-02-02 03:06:41.451961 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:06:41.451966 | orchestrator | 2026-02-02 03:06:41.451972 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-02-02 03:06:41.451978 | orchestrator | Monday 02 February 2026 03:06:36 +0000 (0:00:01.616) 0:03:42.742 ******* 2026-02-02 03:06:41.451984 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-02-02 03:06:41.451992 | orchestrator | 2026-02-02 03:06:41.451998 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-02 03:06:41.452004 | orchestrator | Monday 02 February 2026 03:06:37 +0000 (0:00:00.853) 0:03:43.595 ******* 2026-02-02 03:06:41.452011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-02 03:06:41.452032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-02 03:06:53.402392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-02 03:06:53.402491 | orchestrator | 2026-02-02 03:06:53.402505 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-02 03:06:53.402515 | orchestrator | Monday 02 February 2026 03:06:41 +0000 (0:00:03.971) 0:03:47.567 ******* 2026-02-02 03:06:53.402526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-02 03:06:53.402534 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:06:53.402560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-02 03:06:53.402569 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:06:53.402578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-02 03:06:53.402586 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:06:53.402594 | orchestrator | 2026-02-02 03:06:53.402603 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-02-02 03:06:53.402612 | orchestrator | Monday 02 February 2026 03:06:42 +0000 (0:00:01.476) 0:03:49.043 ******* 2026-02-02 03:06:53.402621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-02 03:06:53.402633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-02 03:06:53.402661 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:06:53.402670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-02 03:06:53.402678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-02 03:06:53.402687 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:06:53.402695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout 
tunnel 1h']}})  2026-02-02 03:06:53.402703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-02 03:06:53.402725 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:06:53.402734 | orchestrator | 2026-02-02 03:06:53.402742 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-02 03:06:53.402750 | orchestrator | Monday 02 February 2026 03:06:44 +0000 (0:00:01.604) 0:03:50.648 ******* 2026-02-02 03:06:53.402758 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:06:53.402766 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:06:53.402774 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:06:53.402782 | orchestrator | 2026-02-02 03:06:53.402790 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-02 03:06:53.402798 | orchestrator | Monday 02 February 2026 03:06:46 +0000 (0:00:02.347) 0:03:52.995 ******* 2026-02-02 03:06:53.402806 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:06:53.402814 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:06:53.402822 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:06:53.402829 | orchestrator | 2026-02-02 03:06:53.402837 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-02-02 03:06:53.402845 | orchestrator | Monday 02 February 2026 03:06:49 +0000 (0:00:03.033) 0:03:56.029 ******* 2026-02-02 03:06:53.402854 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-02-02 03:06:53.402863 | orchestrator | 2026-02-02 03:06:53.402871 | orchestrator | TASK [haproxy-config : Copying over 
nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-02-02 03:06:53.402879 | orchestrator | Monday 02 February 2026 03:06:51 +0000 (0:00:01.142) 0:03:57.172 ******* 2026-02-02 03:06:53.402892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-02 03:06:53.402901 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:06:53.402912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-02 03:06:53.402936 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:06:53.402946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 
1h']}}}})  2026-02-02 03:06:53.402956 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:06:53.402965 | orchestrator | 2026-02-02 03:06:53.402975 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-02-02 03:06:53.402984 | orchestrator | Monday 02 February 2026 03:06:52 +0000 (0:00:01.056) 0:03:58.228 ******* 2026-02-02 03:06:53.402994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-02 03:06:53.403003 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:06:53.403012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-02 03:06:53.403027 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:07:16.819313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': 
['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-02 03:07:16.819428 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:07:16.819447 | orchestrator | 2026-02-02 03:07:16.819460 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-02-02 03:07:16.819474 | orchestrator | Monday 02 February 2026 03:06:53 +0000 (0:00:01.287) 0:03:59.516 ******* 2026-02-02 03:07:16.819486 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:07:16.819497 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:07:16.819509 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:07:16.819519 | orchestrator | 2026-02-02 03:07:16.819530 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-02 03:07:16.819541 | orchestrator | Monday 02 February 2026 03:06:54 +0000 (0:00:01.552) 0:04:01.069 ******* 2026-02-02 03:07:16.819552 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:07:16.819564 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:07:16.819575 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:07:16.819586 | orchestrator | 2026-02-02 03:07:16.819597 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-02 03:07:16.819608 | orchestrator | Monday 02 February 2026 03:06:57 +0000 (0:00:02.783) 0:04:03.852 ******* 2026-02-02 03:07:16.819646 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:07:16.819658 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:07:16.819669 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:07:16.819680 | orchestrator | 2026-02-02 03:07:16.819706 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-02-02 03:07:16.819717 | orchestrator | Monday 02 
February 2026 03:07:00 +0000 (0:00:02.738) 0:04:06.591 ******* 2026-02-02 03:07:16.819729 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-02-02 03:07:16.819742 | orchestrator | 2026-02-02 03:07:16.819753 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-02-02 03:07:16.819764 | orchestrator | Monday 02 February 2026 03:07:01 +0000 (0:00:01.175) 0:04:07.767 ******* 2026-02-02 03:07:16.819776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-02 03:07:16.819788 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:07:16.819803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-02 03:07:16.819816 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:07:16.819830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-02 03:07:16.819842 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:07:16.819855 | orchestrator | 2026-02-02 03:07:16.819884 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-02-02 03:07:16.819910 | orchestrator | Monday 02 February 2026 03:07:02 +0000 (0:00:01.268) 0:04:09.035 ******* 2026-02-02 03:07:16.819941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-02 03:07:16.819956 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:07:16.819969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-02 
03:07:16.819991 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:07:16.820024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-02 03:07:16.820037 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:07:16.820050 | orchestrator | 2026-02-02 03:07:16.820069 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-02-02 03:07:16.820083 | orchestrator | Monday 02 February 2026 03:07:04 +0000 (0:00:01.459) 0:04:10.495 ******* 2026-02-02 03:07:16.820096 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:07:16.820110 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:07:16.820122 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:07:16.820135 | orchestrator | 2026-02-02 03:07:16.820148 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-02 03:07:16.820159 | orchestrator | Monday 02 February 2026 03:07:06 +0000 (0:00:01.951) 0:04:12.447 ******* 2026-02-02 03:07:16.820170 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:07:16.820181 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:07:16.820191 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:07:16.820202 | orchestrator | 2026-02-02 03:07:16.820213 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-02 03:07:16.820224 | orchestrator | Monday 02 February 2026 03:07:08 +0000 (0:00:02.330) 0:04:14.777 ******* 2026-02-02 03:07:16.820235 | 
orchestrator | ok: [testbed-node-0] 2026-02-02 03:07:16.820246 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:07:16.820257 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:07:16.820267 | orchestrator | 2026-02-02 03:07:16.820278 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-02-02 03:07:16.820290 | orchestrator | Monday 02 February 2026 03:07:11 +0000 (0:00:03.212) 0:04:17.990 ******* 2026-02-02 03:07:16.820300 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:07:16.820312 | orchestrator | 2026-02-02 03:07:16.820322 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-02-02 03:07:16.820333 | orchestrator | Monday 02 February 2026 03:07:13 +0000 (0:00:01.305) 0:04:19.296 ******* 2026-02-02 03:07:16.820346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-02 03:07:16.820360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-02 03:07:16.820388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-02 03:07:17.561993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-02 03:07:17.562228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-02 03:07:17.562256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-02 03:07:17.562274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-02 03:07:17.562293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-02 03:07:17.562344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-02 03:07:17.562389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-02 03:07:17.562410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-02 03:07:17.562427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-02 03:07:17.562444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-02 03:07:17.562502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-02 03:07:17.562539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-02 03:07:17.562559 | orchestrator | 2026-02-02 03:07:17.562579 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-02 03:07:17.562592 | orchestrator | Monday 02 February 2026 03:07:16 +0000 (0:00:03.774) 0:04:23.070 ******* 2026-02-02 03:07:17.562617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-02 03:07:17.729134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-02 03:07:17.729236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-02 03:07:17.729252 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-02 03:07:17.729266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-02 03:07:17.729300 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:07:17.729315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-02 03:07:17.729328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-02 03:07:17.729403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-02 03:07:17.729418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-02 03:07:17.729429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-02 03:07:17.729448 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:07:17.729460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-02 03:07:17.729472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-02 03:07:17.729483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-02 03:07:17.729508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-02 03:07:29.314652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-02 03:07:29.314809 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:07:29.314828 | orchestrator |
2026-02-02 03:07:29.314841 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-02-02 03:07:29.314853 | orchestrator | Monday 02 February 2026 03:07:17 +0000 (0:00:00.780) 0:04:23.851 *******
2026-02-02 03:07:29.314864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-02 03:07:29.314899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-02 03:07:29.314911 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:07:29.314922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-02 03:07:29.314932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-02 03:07:29.314942 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:07:29.314951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-02 03:07:29.314961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-02 03:07:29.314971 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:07:29.314980 | orchestrator |
2026-02-02 03:07:29.315022 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2026-02-02 03:07:29.315032 | orchestrator | Monday 02 February 2026 03:07:18 +0000 (0:00:00.909) 0:04:24.760 *******
2026-02-02 03:07:29.315042 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:07:29.315052 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:07:29.315062 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:07:29.315072 | orchestrator |
2026-02-02 03:07:29.315082 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2026-02-02 03:07:29.315092 | orchestrator | Monday 02 February 2026 03:07:20 +0000 (0:00:01.778) 0:04:26.539 *******
2026-02-02 03:07:29.315101 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:07:29.315111 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:07:29.315122 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:07:29.315132 | orchestrator |
2026-02-02 03:07:29.315142 | orchestrator | TASK [include_role : opensearch] ***********************************************
2026-02-02 03:07:29.315152 | orchestrator | Monday 02 February 2026 03:07:22 +0000 (0:00:02.085) 0:04:28.624 *******
2026-02-02 03:07:29.315162 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 03:07:29.315172 | orchestrator |
2026-02-02 03:07:29.315182 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2026-02-02 03:07:29.315194 | orchestrator | Monday 02 February 2026 03:07:23 +0000 (0:00:01.391) 0:04:30.016 *******
2026-02-02 03:07:29.315221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-02 03:07:29.315254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-02 03:07:29.315276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-02 03:07:29.315290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-02 03:07:29.315309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-02 03:07:29.315331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-02 03:07:31.365194 | orchestrator |
2026-02-02 03:07:31.366287 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2026-02-02 03:07:31.366350 | orchestrator | Monday 02 February 2026 03:07:29 +0000 (0:00:05.407) 0:04:35.423 *******
2026-02-02 03:07:31.366369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-02 03:07:31.366389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-02 03:07:31.366402 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:07:31.366439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-02 03:07:31.366452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-02 03:07:31.366508 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:07:31.366518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-02 03:07:31.366526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-02 03:07:31.366532 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:07:31.366539 | orchestrator |
2026-02-02 03:07:31.366547 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2026-02-02 03:07:31.366558 | orchestrator | Monday 02 February 2026 03:07:30 +0000 (0:00:01.127) 0:04:36.551 *******
2026-02-02 03:07:31.366568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-02-02 03:07:31.366576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-02-02 03:07:31.366586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-02-02 03:07:31.366599 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:07:31.366611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-02-02 03:07:31.366617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-02-02 03:07:31.366624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-02-02 03:07:31.366630 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:07:31.366637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-02-02 03:07:31.366643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-02-02 03:07:31.366660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-02-02 03:07:37.788059 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:07:37.788142 | orchestrator |
2026-02-02 03:07:37.788150 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2026-02-02 03:07:37.788158 | orchestrator | Monday 02 February 2026 03:07:31 +0000 (0:00:00.924) 0:04:37.476 *******
2026-02-02 03:07:37.788164 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:07:37.788170 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:07:37.788176 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:07:37.788182 | orchestrator |
2026-02-02 03:07:37.788187 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2026-02-02 03:07:37.788201 | orchestrator | Monday 02 February 2026 03:07:31 +0000 (0:00:00.432) 0:04:37.908 *******
2026-02-02 03:07:37.788207 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:07:37.788213 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:07:37.788225 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:07:37.788231 | orchestrator |
2026-02-02 03:07:37.788236 | orchestrator | TASK [include_role : prometheus] ***********************************************
2026-02-02 03:07:37.788242 | orchestrator | Monday 02 February 2026 03:07:33 +0000 (0:00:01.773) 0:04:39.682 *******
2026-02-02 03:07:37.788248 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 03:07:37.788254 | orchestrator |
2026-02-02 03:07:37.788260 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2026-02-02 03:07:37.788265 | orchestrator | Monday 02 February 2026 03:07:35 +0000 (0:00:01.763) 0:04:41.446 *******
2026-02-02 03:07:37.788274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-02 03:07:37.788302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 03:07:37.788319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:07:37.788326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:07:37.788332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 03:07:37.788350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-02 03:07:37.788356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 03:07:37.788362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:07:37.788372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:07:37.788378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 03:07:37.788386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-02 03:07:37.788392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 03:07:37.788403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:07:39.354963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:07:39.355111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 03:07:39.355147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-02 03:07:39.355168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-02-02 03:07:39.355176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:07:39.355183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:07:39.355204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-02 03:07:39.355211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-02 03:07:39.355224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-02-02 03:07:39.355235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-02 03:07:39.355242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:07:39.355255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name':
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-02 03:07:40.081935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:07:40.082111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:07:40.082125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-02 03:07:40.082144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:07:40.082148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-02 03:07:40.082153 | orchestrator | 2026-02-02 03:07:40.082158 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-02 03:07:40.082164 | orchestrator | Monday 02 February 2026 03:07:39 +0000 (0:00:04.190) 0:04:45.636 ******* 2026-02-02 03:07:40.082169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-02 03:07:40.082174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-02 03:07:40.082208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:07:40.082213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:07:40.082219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 03:07:40.082231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-02 03:07:40.082238 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-02 03:07:40.082250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-02 03:07:40.274166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:07:40.274256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-02 03:07:40.274283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:07:40.274293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:07:40.274303 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-02 03:07:40.274312 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:07:40.274323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:07:40.274333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 03:07:40.274375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-02 03:07:40.274387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-02 03:07:40.274400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:07:40.274409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:07:40.274418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-02 03:07:40.274426 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:07:40.274435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-02 03:07:40.274458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-02 03:07:42.294840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:07:42.295067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:07:42.295123 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 03:07:42.295147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-02 03:07:42.295167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-02 03:07:42.295217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:07:42.295260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 03:07:42.295279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-02 03:07:42.295296 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:07:42.295315 | orchestrator | 2026-02-02 03:07:42.295333 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-02 03:07:42.295352 | orchestrator | Monday 02 February 2026 03:07:40 +0000 (0:00:00.903) 0:04:46.540 ******* 2026-02-02 03:07:42.295377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-02 03:07:42.295399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-02 03:07:42.295418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-02 03:07:42.295439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-02 03:07:42.295459 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:07:42.295475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}})  2026-02-02 03:07:42.295517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-02 03:07:42.295535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-02 03:07:42.295553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-02 03:07:42.295569 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:07:42.295586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-02 03:07:42.295603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-02 03:07:42.295621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-02 03:07:42.295652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-02 03:07:49.968113 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:07:49.968209 | orchestrator | 2026-02-02 03:07:49.968220 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-02 03:07:49.968230 | orchestrator | Monday 02 February 2026 03:07:42 +0000 (0:00:01.864) 0:04:48.405 ******* 2026-02-02 03:07:49.968237 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:07:49.968244 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:07:49.968252 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:07:49.968259 | orchestrator | 2026-02-02 03:07:49.968266 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-02 03:07:49.968274 | orchestrator | Monday 02 February 2026 03:07:42 +0000 (0:00:00.505) 0:04:48.910 ******* 2026-02-02 03:07:49.968281 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:07:49.968288 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:07:49.968294 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:07:49.968301 | orchestrator | 2026-02-02 03:07:49.968308 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-02 03:07:49.968315 | orchestrator | Monday 02 February 2026 03:07:44 +0000 (0:00:01.469) 0:04:50.379 ******* 2026-02-02 03:07:49.968322 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:07:49.968329 | orchestrator | 2026-02-02 03:07:49.968336 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-02 03:07:49.968343 | orchestrator | Monday 02 February 2026 03:07:46 +0000 (0:00:01.759) 0:04:52.138 ******* 
2026-02-02 03:07:49.968354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-02 03:07:49.968390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-02 03:07:49.968435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-02 03:07:49.968444 | orchestrator |
2026-02-02 03:07:49.968450 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-02-02 03:07:49.968472 | orchestrator | Monday 02 February 2026 03:07:48 +0000 (0:00:02.292) 0:04:54.431 *******
2026-02-02 03:07:49.968479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-02 03:07:49.968497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-02 03:07:49.968505 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:07:49.968512 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:07:49.968519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-02 03:07:49.968525 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:07:49.968531 | orchestrator |
2026-02-02 03:07:49.968537 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2026-02-02 03:07:49.968544 | orchestrator | Monday 02 February 2026 03:07:48 +0000 (0:00:00.412) 0:04:54.843 *******
2026-02-02 03:07:49.968551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-02-02 03:07:49.968559 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:07:49.968566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-02-02 03:07:49.968572 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:07:49.968579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-02-02 03:07:49.968585 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:07:49.968590 | orchestrator |
2026-02-02 03:07:49.968597 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-02-02 03:07:49.968603 | orchestrator | Monday 02 February 2026 03:07:49 +0000 (0:00:00.653) 0:04:55.496 *******
2026-02-02 03:07:49.968616 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:08:00.293179 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:08:00.293291 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:08:00.293308 | orchestrator |
2026-02-02 03:08:00.293321 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-02-02 03:08:00.293335 | orchestrator | Monday 02 February 2026 03:07:50 +0000 (0:00:00.837) 0:04:56.334 *******
2026-02-02 03:08:00.293346 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:08:00.293382 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:08:00.293394 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:08:00.293405 | orchestrator |
2026-02-02 03:08:00.293416 | orchestrator | TASK [include_role : skyline] **************************************************
2026-02-02 03:08:00.293427 | orchestrator | Monday 02 February 2026 03:07:51 +0000 (0:00:01.385) 0:04:57.720 *******
2026-02-02 03:08:00.293438 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 03:08:00.293450 | orchestrator |
2026-02-02 03:08:00.293462 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2026-02-02 03:08:00.293473 | orchestrator | Monday 02 February 2026 03:07:53 +0000 (0:00:01.474) 0:04:59.195 *******
2026-02-02 03:08:00.293503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-02 03:08:00.293521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-02 03:08:00.293533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-02 03:08:00.293564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-02 03:08:00.293593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-02 03:08:00.293605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-02 03:08:00.293617 | orchestrator |
2026-02-02 03:08:00.293629 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2026-02-02 03:08:00.293641 | orchestrator | Monday 02 February 2026 03:07:59 +0000 (0:00:06.049) 0:05:05.244 *******
2026-02-02 03:08:00.293653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-02 03:08:00.293673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-02 03:08:06.141785 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:08:06.141921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-02 03:08:06.142003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-02 03:08:06.142075 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:08:06.142091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-02 03:08:06.142103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-02 03:08:06.142148 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:08:06.142161 | orchestrator |
2026-02-02 03:08:06.142190 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2026-02-02 03:08:06.143055 | orchestrator | Monday 02 February 2026 03:08:00 +0000 (0:00:01.168) 0:05:06.413 *******
2026-02-02 03:08:06.143110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-02-02 03:08:06.143125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-02-02 03:08:06.143138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-02-02 03:08:06.143160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-02-02 03:08:06.143172 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:08:06.143183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-02-02 03:08:06.143195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-02-02 03:08:06.143206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-02-02 03:08:06.143217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-02-02 03:08:06.143228 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:08:06.143239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-02-02 03:08:06.143250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-02-02 03:08:06.143261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-02-02 03:08:06.143272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-02-02 03:08:06.143283 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:08:06.143294 | orchestrator |
2026-02-02 03:08:06.143319 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-02-02 03:08:06.143330 | orchestrator | Monday 02 February 2026 03:08:01 +0000 (0:00:00.940) 0:05:07.354 *******
2026-02-02 03:08:06.143341 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:08:06.143352 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:08:06.143363 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:08:06.143374 | orchestrator |
2026-02-02 03:08:06.143385 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-02-02 03:08:06.143396 | orchestrator | Monday 02 February 2026 03:08:02 +0000 (0:00:01.359) 0:05:08.713 *******
2026-02-02 03:08:06.143407 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:08:06.143417 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:08:06.143428 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:08:06.143439 | orchestrator |
2026-02-02 03:08:06.143450 | orchestrator | TASK [include_role : swift] ****************************************************
2026-02-02 03:08:06.143461 | orchestrator | Monday 02 February 2026 03:08:04 +0000 (0:00:02.190) 0:05:10.903 *******
2026-02-02 03:08:06.143472 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:08:06.143483 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:08:06.143494 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:08:06.143504 | orchestrator |
2026-02-02 03:08:06.143515 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-02-02 03:08:06.143526 | orchestrator | Monday 02 February 2026 03:08:05 +0000 (0:00:00.676) 0:05:11.580 *******
2026-02-02 03:08:06.143537 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:08:06.143548 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:08:06.143559 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:08:06.143570 | orchestrator |
2026-02-02 03:08:06.143580 | orchestrator | TASK [include_role : trove] ****************************************************
2026-02-02 03:08:06.143592 | orchestrator | Monday 02 February 2026 03:08:05 +0000 (0:00:00.360) 0:05:11.941 *******
2026-02-02 03:08:06.143602 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:08:06.143621 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:08:50.645762 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:08:50.645854 | orchestrator |
2026-02-02 03:08:50.645908 | orchestrator | TASK [include_role : venus] ****************************************************
2026-02-02 03:08:50.645916 | orchestrator | Monday 02 February 2026 03:08:06 +0000 (0:00:00.322) 0:05:12.264 *******
2026-02-02 03:08:50.645921 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:08:50.645927 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:08:50.645932 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:08:50.645937 | orchestrator |
2026-02-02 03:08:50.645942 | orchestrator | TASK [include_role : watcher] **************************************************
2026-02-02 03:08:50.645947 | orchestrator | Monday 02 February 2026 03:08:06 +0000 (0:00:00.333) 0:05:12.598 *******
2026-02-02 03:08:50.645953 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:08:50.645958 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:08:50.645962 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:08:50.645967 | orchestrator |
2026-02-02 03:08:50.645972 | orchestrator | TASK [include_role : zun] ******************************************************
2026-02-02 03:08:50.645990 | orchestrator | Monday 02 February 2026 03:08:07 +0000 (0:00:00.671) 0:05:13.269 *******
2026-02-02 03:08:50.645996 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:08:50.646001 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:08:50.646006 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:08:50.646011 | orchestrator |
2026-02-02 03:08:50.646042 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-02-02 03:08:50.646047 | orchestrator | Monday 02 February 2026 03:08:07 +0000 (0:00:00.534) 0:05:13.803 *******
2026-02-02 03:08:50.646052 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:08:50.646059 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:08:50.646063 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:08:50.646068 | orchestrator |
2026-02-02 03:08:50.646073 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-02-02 03:08:50.646094 | orchestrator | Monday 02 February 2026 03:08:08 +0000 (0:00:00.640) 0:05:14.444 *******
2026-02-02 03:08:50.646099 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:08:50.646104 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:08:50.646109 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:08:50.646113 | orchestrator |
2026-02-02 03:08:50.646118 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-02-02 03:08:50.646123 | orchestrator | Monday 02 February 2026 03:08:08 +0000 (0:00:00.394) 0:05:14.839 *******
2026-02-02 03:08:50.646128 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:08:50.646133 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:08:50.646137 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:08:50.646142 | orchestrator |
2026-02-02 03:08:50.646147 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-02-02 03:08:50.646152 | orchestrator | Monday 02 February 2026 03:08:09 +0000 (0:00:01.246) 0:05:16.085 *******
2026-02-02 03:08:50.646156 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:08:50.646161 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:08:50.646166 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:08:50.646170 | orchestrator |
2026-02-02 03:08:50.646177 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-02-02 03:08:50.646184 | orchestrator | Monday 02 February 2026 03:08:10 +0000 (0:00:00.896) 0:05:16.927 *******
2026-02-02 03:08:50.646192 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:08:50.646200 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:08:50.646207 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:08:50.646215 | orchestrator |
2026-02-02 03:08:50.646222 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-02-02 03:08:50.646229 | orchestrator | Monday 02 February 2026 03:08:11 +0000 (0:00:08.398) 0:05:17.823 *******
2026-02-02 03:08:50.646236 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:08:50.646244 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:08:50.646250 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:08:50.646258 | orchestrator |
2026-02-02 03:08:50.646278 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-02-02 03:08:50.646287 | orchestrator | Monday 02 February 2026 03:08:20 +0000 (0:00:08.398) 0:05:26.221 *******
2026-02-02 03:08:50.646295 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:08:50.646302 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:08:50.646311 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:08:50.646319 | orchestrator |
2026-02-02 03:08:50.646327 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-02-02 03:08:50.646337 | orchestrator | Monday 02 February 2026 03:08:21 +0000 (0:00:01.200) 0:05:27.422 *******
2026-02-02 03:08:50.646346 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:08:50.646354 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:08:50.646362 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:08:50.646372 | orchestrator |
2026-02-02 03:08:50.646381 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-02-02 03:08:50.646390 | orchestrator | Monday 02 February 2026 03:08:32 +0000 (0:00:10.895) 0:05:38.317 *******
2026-02-02 03:08:50.646398 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:08:50.646406 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:08:50.646413 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:08:50.646421 | orchestrator |
2026-02-02 03:08:50.646429 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-02-02 03:08:50.646437 | orchestrator | Monday 02 February 2026 03:08:36 +0000 (0:00:04.754) 0:05:43.072 *******
2026-02-02 03:08:50.646444 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:08:50.646452 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:08:50.646459 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:08:50.646468 | orchestrator |
2026-02-02 03:08:50.646477 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-02-02 03:08:50.646485 | orchestrator | Monday 02 February 2026 03:08:41 +0000 (0:00:04.241) 0:05:47.313 *******
2026-02-02 03:08:50.646505 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:08:50.646513 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:08:50.646521 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:08:50.646528 | orchestrator |
2026-02-02 03:08:50.646537 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-02-02 03:08:50.646542 | orchestrator | Monday 02 February 2026 03:08:41 +0000 (0:00:00.696) 0:05:48.010 *******
2026-02-02 03:08:50.646547 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:08:50.646552 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:08:50.646557 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:08:50.646561 | orchestrator |
2026-02-02 03:08:50.646582 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-02-02 03:08:50.646587 | orchestrator | Monday 02 February 2026 03:08:42 +0000 (0:00:00.391) 0:05:48.401 *******
2026-02-02 03:08:50.646592 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:08:50.646597 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:08:50.646601 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:08:50.646606 | orchestrator |
2026-02-02 03:08:50.646611 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-02-02 03:08:50.646616 | orchestrator | Monday 02 February 2026 03:08:42 +0000 (0:00:00.364) 0:05:48.766 *******
2026-02-02 03:08:50.646621 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:08:50.646625 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:08:50.646630 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:08:50.646635 | orchestrator |
2026-02-02 03:08:50.646640 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-02-02 03:08:50.646645 | orchestrator | Monday 02 February 2026 03:08:43 +0000 (0:00:00.374) 0:05:49.141 *******
2026-02-02 03:08:50.646649 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:08:50.646659 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:08:50.646664 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:08:50.646669 | orchestrator |
2026-02-02 03:08:50.646674 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-02-02 03:08:50.646682 | orchestrator | Monday 02 February 2026 03:08:43 +0000 (0:00:00.750) 0:05:49.891 *******
2026-02-02 03:08:50.646689 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:08:50.646697 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:08:50.646705 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:08:50.646713 | orchestrator |
2026-02-02 03:08:50.646720 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-02-02 03:08:50.646727 | orchestrator | Monday 02 February 2026 03:08:44 +0000 (0:00:00.348) 0:05:50.239 *******
2026-02-02 03:08:50.646735 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:08:50.646743 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:08:50.646752 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:08:50.646759 | orchestrator |
2026-02-02 03:08:50.646767 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-02-02 03:08:50.646774 | orchestrator | Monday
02 February 2026 03:08:48 +0000 (0:00:04.807) 0:05:55.047 ******* 2026-02-02 03:08:50.646782 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:08:50.646790 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:08:50.646798 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:08:50.646804 | orchestrator | 2026-02-02 03:08:50.646809 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 03:08:50.646815 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-02-02 03:08:50.646822 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-02-02 03:08:50.646827 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-02-02 03:08:50.646832 | orchestrator | 2026-02-02 03:08:50.646842 | orchestrator | 2026-02-02 03:08:50.646847 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 03:08:50.646852 | orchestrator | Monday 02 February 2026 03:08:49 +0000 (0:00:00.834) 0:05:55.881 ******* 2026-02-02 03:08:50.646901 | orchestrator | =============================================================================== 2026-02-02 03:08:50.646912 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 10.90s 2026-02-02 03:08:50.646920 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.40s 2026-02-02 03:08:50.646929 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.05s 2026-02-02 03:08:50.646937 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.41s 2026-02-02 03:08:50.646945 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.81s 2026-02-02 03:08:50.646953 | orchestrator | loadbalancer : Wait for backup proxysql to 
start ------------------------ 4.75s 2026-02-02 03:08:50.646961 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.41s 2026-02-02 03:08:50.646969 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.24s 2026-02-02 03:08:50.646974 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.19s 2026-02-02 03:08:50.646979 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.18s 2026-02-02 03:08:50.646984 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.06s 2026-02-02 03:08:50.646988 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.97s 2026-02-02 03:08:50.646993 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.84s 2026-02-02 03:08:50.646998 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.77s 2026-02-02 03:08:50.647002 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.71s 2026-02-02 03:08:50.647007 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.56s 2026-02-02 03:08:50.647012 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.52s 2026-02-02 03:08:50.647017 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.48s 2026-02-02 03:08:50.647021 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.46s 2026-02-02 03:08:50.647026 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.32s 2026-02-02 03:08:53.096945 | orchestrator | 2026-02-02 03:08:53 | INFO  | Task 6d225b3a-06fa-4d69-a25a-8b5febdc1ddc (opensearch) was prepared for execution. 
2026-02-02 03:08:53.097017 | orchestrator | 2026-02-02 03:08:53 | INFO  | It takes a moment until task 6d225b3a-06fa-4d69-a25a-8b5febdc1ddc (opensearch) has been started and output is visible here.
2026-02-02 03:09:04.380734 | orchestrator |
2026-02-02 03:09:04.380935 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-02 03:09:04.380969 | orchestrator |
2026-02-02 03:09:04.380990 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-02 03:09:04.381012 | orchestrator | Monday 02 February 2026 03:08:57 +0000 (0:00:00.267) 0:00:00.268 *******
2026-02-02 03:09:04.381033 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:09:04.381053 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:09:04.381072 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:09:04.381091 | orchestrator |
2026-02-02 03:09:04.381110 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-02 03:09:04.381129 | orchestrator | Monday 02 February 2026 03:08:58 +0000 (0:00:00.332) 0:00:00.600 *******
2026-02-02 03:09:04.381168 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-02-02 03:09:04.381190 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-02-02 03:09:04.381208 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-02-02 03:09:04.381227 | orchestrator |
2026-02-02 03:09:04.381246 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-02-02 03:09:04.381300 | orchestrator |
2026-02-02 03:09:04.381322 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-02 03:09:04.381344 | orchestrator | Monday 02 February 2026 03:08:58 +0000 (0:00:00.441) 0:00:01.042 *******
2026-02-02 03:09:04.381366 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0,
testbed-node-1, testbed-node-2 2026-02-02 03:09:04.381387 | orchestrator | 2026-02-02 03:09:04.381408 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-02-02 03:09:04.381430 | orchestrator | Monday 02 February 2026 03:08:58 +0000 (0:00:00.491) 0:00:01.534 ******* 2026-02-02 03:09:04.381451 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-02 03:09:04.381472 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-02 03:09:04.381494 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-02 03:09:04.381515 | orchestrator | 2026-02-02 03:09:04.381536 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-02 03:09:04.381558 | orchestrator | Monday 02 February 2026 03:08:59 +0000 (0:00:00.675) 0:00:02.210 ******* 2026-02-02 03:09:04.381583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-02 03:09:04.381610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-02 03:09:04.381657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-02 03:09:04.381693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-02 03:09:04.381732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-02 03:09:04.381755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-02 03:09:04.381771 | orchestrator | 2026-02-02 03:09:04.381782 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-02 03:09:04.381794 | orchestrator | Monday 02 February 2026 03:09:01 +0000 (0:00:01.681) 0:00:03.891 ******* 2026-02-02 03:09:04.381805 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:09:04.381816 | orchestrator | 2026-02-02 03:09:04.381827 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-02 03:09:04.381838 | orchestrator | Monday 02 February 2026 03:09:01 +0000 (0:00:00.567) 0:00:04.458 ******* 2026-02-02 03:09:04.381909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g 
-Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-02 03:09:05.267492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-02 03:09:05.267581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-02 03:09:05.267592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-02 03:09:05.267601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-02 03:09:05.267660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-02 03:09:05.267670 | orchestrator | 2026-02-02 03:09:05.267679 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-02-02 03:09:05.267686 | orchestrator | Monday 02 February 2026 03:09:04 +0000 (0:00:02.467) 0:00:06.926 ******* 
2026-02-02 03:09:05.267695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-02 03:09:05.267701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  
2026-02-02 03:09:05.267707 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:09:05.267714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-02 03:09:05.267737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-02 03:09:06.349391 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:09:06.349516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-02 03:09:06.349547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-02 03:09:06.349569 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:09:06.349589 | orchestrator | 2026-02-02 03:09:06.349611 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-02-02 03:09:06.349631 | orchestrator | Monday 02 February 2026 03:09:05 +0000 (0:00:00.887) 0:00:07.814 ******* 2026-02-02 03:09:06.349720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-02 03:09:06.349761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-02 03:09:06.349805 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:09:06.349825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-02 03:09:06.349877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-02 03:09:06.349899 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:09:06.349935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-02 03:09:06.349968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-02 03:09:06.349992 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:09:06.350013 | orchestrator | 2026-02-02 03:09:06.350111 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-02-02 03:09:06.350145 | orchestrator | Monday 02 February 2026 03:09:06 +0000 (0:00:01.079) 0:00:08.893 ******* 2026-02-02 03:09:14.314240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-02 03:09:14.314315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-02 03:09:14.314322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-02 03:09:14.314355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-02 03:09:14.314372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-02 03:09:14.314377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-02 03:09:14.314386 | orchestrator | 2026-02-02 03:09:14.314391 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-02 03:09:14.314396 | orchestrator | Monday 02 February 2026 03:09:08 +0000 (0:00:02.218) 0:00:11.112 ******* 2026-02-02 03:09:14.314400 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:09:14.314405 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:09:14.314409 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:09:14.314413 | orchestrator | 2026-02-02 03:09:14.314417 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-02-02 03:09:14.314420 | orchestrator | Monday 02 February 2026 03:09:10 +0000 (0:00:02.252) 0:00:13.365 ******* 2026-02-02 03:09:14.314424 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:09:14.314428 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:09:14.314432 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:09:14.314435 | 
orchestrator | 2026-02-02 03:09:14.314439 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-02-02 03:09:14.314443 | orchestrator | Monday 02 February 2026 03:09:12 +0000 (0:00:01.893) 0:00:15.258 ******* 2026-02-02 03:09:14.314447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-02 03:09:14.314455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2026-02-02 03:09:14.314463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-02 03:11:48.563349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2026-02-02 03:11:48.563520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-02 03:11:48.563567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-02 03:11:48.563587 | orchestrator | 2026-02-02 03:11:48.563605 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-02 03:11:48.563623 | orchestrator | Monday 02 February 2026 03:09:14 +0000 (0:00:01.604) 0:00:16.862 ******* 2026-02-02 03:11:48.563730 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:11:48.563747 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:11:48.563762 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:11:48.563776 | orchestrator | 2026-02-02 03:11:48.563818 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-02 03:11:48.563835 | orchestrator | Monday 02 February 2026 03:09:14 +0000 (0:00:00.294) 0:00:17.156 ******* 2026-02-02 03:11:48.563851 | orchestrator | 2026-02-02 03:11:48.563862 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-02 03:11:48.563873 | orchestrator | Monday 02 February 2026 03:09:14 +0000 (0:00:00.077) 0:00:17.234 ******* 2026-02-02 03:11:48.563883 | orchestrator | 2026-02-02 03:11:48.563894 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-02 03:11:48.563917 | orchestrator | Monday 02 February 2026 03:09:14 +0000 (0:00:00.067) 0:00:17.301 ******* 2026-02-02 03:11:48.563927 | orchestrator | 2026-02-02 03:11:48.563938 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-02 03:11:48.563968 | orchestrator | Monday 02 February 2026 03:09:14 +0000 (0:00:00.070) 0:00:17.372 ******* 2026-02-02 03:11:48.563979 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:11:48.563989 | orchestrator | 
2026-02-02 03:11:48.563999 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-02 03:11:48.564008 | orchestrator | Monday 02 February 2026 03:09:15 +0000 (0:00:00.192) 0:00:17.564 ******* 2026-02-02 03:11:48.564017 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:11:48.564025 | orchestrator | 2026-02-02 03:11:48.564034 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-02 03:11:48.564043 | orchestrator | Monday 02 February 2026 03:09:15 +0000 (0:00:00.718) 0:00:18.283 ******* 2026-02-02 03:11:48.564052 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:11:48.564061 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:11:48.564069 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:11:48.564078 | orchestrator | 2026-02-02 03:11:48.564087 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-02 03:11:48.564095 | orchestrator | Monday 02 February 2026 03:10:21 +0000 (0:01:05.930) 0:01:24.214 ******* 2026-02-02 03:11:48.564104 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:11:48.564113 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:11:48.564121 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:11:48.564130 | orchestrator | 2026-02-02 03:11:48.564139 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-02 03:11:48.564148 | orchestrator | Monday 02 February 2026 03:11:38 +0000 (0:01:16.704) 0:02:40.918 ******* 2026-02-02 03:11:48.564157 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:11:48.564166 | orchestrator | 2026-02-02 03:11:48.564175 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-02 03:11:48.564184 | orchestrator | Monday 02 February 2026 03:11:38 +0000 
(0:00:00.533) 0:02:41.452 ******* 2026-02-02 03:11:48.564192 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:11:48.564201 | orchestrator | 2026-02-02 03:11:48.564210 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-02 03:11:48.564219 | orchestrator | Monday 02 February 2026 03:11:41 +0000 (0:00:02.422) 0:02:43.875 ******* 2026-02-02 03:11:48.564227 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:11:48.564236 | orchestrator | 2026-02-02 03:11:48.564245 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-02 03:11:48.564254 | orchestrator | Monday 02 February 2026 03:11:43 +0000 (0:00:02.119) 0:02:45.994 ******* 2026-02-02 03:11:48.564262 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:11:48.564271 | orchestrator | 2026-02-02 03:11:48.564280 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-02 03:11:48.564289 | orchestrator | Monday 02 February 2026 03:11:46 +0000 (0:00:02.563) 0:02:48.557 ******* 2026-02-02 03:11:48.564298 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:11:48.564307 | orchestrator | 2026-02-02 03:11:48.564316 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 03:11:48.564326 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-02 03:11:48.564336 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-02 03:11:48.564358 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-02 03:11:48.564373 | orchestrator | 2026-02-02 03:11:48.564388 | orchestrator | 2026-02-02 03:11:48.564413 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 03:11:48.564429 | orchestrator | Monday 02 
February 2026 03:11:48 +0000 (0:00:02.532) 0:02:51.090 ******* 2026-02-02 03:11:48.564444 | orchestrator | =============================================================================== 2026-02-02 03:11:48.564460 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 76.70s 2026-02-02 03:11:48.564474 | orchestrator | opensearch : Restart opensearch container ------------------------------ 65.93s 2026-02-02 03:11:48.564488 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.56s 2026-02-02 03:11:48.564504 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.53s 2026-02-02 03:11:48.564518 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.47s 2026-02-02 03:11:48.564533 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.42s 2026-02-02 03:11:48.564548 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.25s 2026-02-02 03:11:48.564563 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.22s 2026-02-02 03:11:48.564578 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.12s 2026-02-02 03:11:48.564592 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.89s 2026-02-02 03:11:48.564606 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.68s 2026-02-02 03:11:48.564657 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.60s 2026-02-02 03:11:48.564673 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.08s 2026-02-02 03:11:48.564686 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.89s 2026-02-02 03:11:48.564700 | orchestrator | opensearch : Perform a 
flush -------------------------------------------- 0.72s 2026-02-02 03:11:48.564713 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.68s 2026-02-02 03:11:48.564737 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2026-02-02 03:11:48.929784 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2026-02-02 03:11:48.929885 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.49s 2026-02-02 03:11:48.929901 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s 2026-02-02 03:11:51.553133 | orchestrator | 2026-02-02 03:11:51 | INFO  | Task 30b23f57-d048-49b9-a3af-7349a293a464 (memcached) was prepared for execution. 2026-02-02 03:11:51.553246 | orchestrator | 2026-02-02 03:11:51 | INFO  | It takes a moment until task 30b23f57-d048-49b9-a3af-7349a293a464 (memcached) has been started and output is visible here. 
2026-02-02 03:12:04.038416 | orchestrator | 2026-02-02 03:12:04.038498 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 03:12:04.038508 | orchestrator | 2026-02-02 03:12:04.038516 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 03:12:04.038523 | orchestrator | Monday 02 February 2026 03:11:56 +0000 (0:00:00.288) 0:00:00.288 ******* 2026-02-02 03:12:04.038530 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:12:04.038538 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:12:04.038544 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:12:04.038550 | orchestrator | 2026-02-02 03:12:04.038557 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 03:12:04.038572 | orchestrator | Monday 02 February 2026 03:11:56 +0000 (0:00:00.319) 0:00:00.608 ******* 2026-02-02 03:12:04.038585 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-02-02 03:12:04.038598 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-02-02 03:12:04.038608 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-02-02 03:12:04.038634 | orchestrator | 2026-02-02 03:12:04.038644 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-02-02 03:12:04.038680 | orchestrator | 2026-02-02 03:12:04.038691 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-02-02 03:12:04.038702 | orchestrator | Monday 02 February 2026 03:11:56 +0000 (0:00:00.438) 0:00:01.046 ******* 2026-02-02 03:12:04.038714 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:12:04.038725 | orchestrator | 2026-02-02 03:12:04.038736 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 
2026-02-02 03:12:04.038743 | orchestrator | Monday 02 February 2026 03:11:57 +0000 (0:00:00.498) 0:00:01.544 ******* 2026-02-02 03:12:04.038750 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-02-02 03:12:04.038757 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-02-02 03:12:04.038763 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-02-02 03:12:04.038769 | orchestrator | 2026-02-02 03:12:04.038775 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-02-02 03:12:04.038782 | orchestrator | Monday 02 February 2026 03:11:58 +0000 (0:00:00.678) 0:00:02.223 ******* 2026-02-02 03:12:04.038788 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-02-02 03:12:04.038795 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-02-02 03:12:04.038801 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-02-02 03:12:04.038807 | orchestrator | 2026-02-02 03:12:04.038813 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-02-02 03:12:04.038820 | orchestrator | Monday 02 February 2026 03:11:59 +0000 (0:00:01.752) 0:00:03.975 ******* 2026-02-02 03:12:04.038846 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:12:04.038852 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:12:04.038859 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:12:04.038865 | orchestrator | 2026-02-02 03:12:04.038871 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-02-02 03:12:04.038878 | orchestrator | Monday 02 February 2026 03:12:01 +0000 (0:00:01.467) 0:00:05.442 ******* 2026-02-02 03:12:04.038884 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:12:04.038890 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:12:04.038897 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:12:04.038903 | orchestrator | 2026-02-02 
03:12:04.038909 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 03:12:04.038916 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 03:12:04.038924 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 03:12:04.038930 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 03:12:04.038936 | orchestrator | 2026-02-02 03:12:04.038942 | orchestrator | 2026-02-02 03:12:04.038949 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 03:12:04.038955 | orchestrator | Monday 02 February 2026 03:12:03 +0000 (0:00:02.342) 0:00:07.784 ******* 2026-02-02 03:12:04.038961 | orchestrator | =============================================================================== 2026-02-02 03:12:04.038967 | orchestrator | memcached : Restart memcached container --------------------------------- 2.34s 2026-02-02 03:12:04.038974 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.75s 2026-02-02 03:12:04.038981 | orchestrator | memcached : Check memcached container ----------------------------------- 1.47s 2026-02-02 03:12:04.038987 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.68s 2026-02-02 03:12:04.038993 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.50s 2026-02-02 03:12:04.038999 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s 2026-02-02 03:12:04.039006 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2026-02-02 03:12:06.562543 | orchestrator | 2026-02-02 03:12:06 | INFO  | Task 28229315-9227-4dac-9867-5cd57cb0aa7f (redis) was prepared for execution. 
2026-02-02 03:12:06.562684 | orchestrator | 2026-02-02 03:12:06 | INFO  | It takes a moment until task 28229315-9227-4dac-9867-5cd57cb0aa7f (redis) has been started and output is visible here. 2026-02-02 03:12:15.309031 | orchestrator | 2026-02-02 03:12:15.309140 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 03:12:15.309156 | orchestrator | 2026-02-02 03:12:15.309167 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 03:12:15.309178 | orchestrator | Monday 02 February 2026 03:12:10 +0000 (0:00:00.257) 0:00:00.257 ******* 2026-02-02 03:12:15.309188 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:12:15.309199 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:12:15.309208 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:12:15.309218 | orchestrator | 2026-02-02 03:12:15.309228 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 03:12:15.309238 | orchestrator | Monday 02 February 2026 03:12:11 +0000 (0:00:00.290) 0:00:00.547 ******* 2026-02-02 03:12:15.309248 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-02-02 03:12:15.309258 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-02-02 03:12:15.309268 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-02-02 03:12:15.309278 | orchestrator | 2026-02-02 03:12:15.309288 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-02-02 03:12:15.309298 | orchestrator | 2026-02-02 03:12:15.309308 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-02-02 03:12:15.309318 | orchestrator | Monday 02 February 2026 03:12:11 +0000 (0:00:00.407) 0:00:00.955 ******* 2026-02-02 03:12:15.309328 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-02-02 03:12:15.309339 | orchestrator | 2026-02-02 03:12:15.309349 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-02-02 03:12:15.309360 | orchestrator | Monday 02 February 2026 03:12:12 +0000 (0:00:00.517) 0:00:01.472 ******* 2026-02-02 03:12:15.309373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-02 03:12:15.309390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-02 03:12:15.309402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-02 03:12:15.309444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-02 03:12:15.309473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-02 03:12:15.309485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-02 03:12:15.309495 | orchestrator | 2026-02-02 03:12:15.309505 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-02-02 03:12:15.309515 | orchestrator | Monday 02 February 2026 03:12:13 +0000 (0:00:01.011) 0:00:02.484 ******* 2026-02-02 03:12:15.309525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-02 03:12:15.309694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-02 03:12:15.309721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-02 03:12:15.309749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-02 03:12:15.309776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-02 03:12:19.090772 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-02 03:12:19.090902 | orchestrator | 2026-02-02 03:12:19.090920 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-02-02 03:12:19.090932 | orchestrator | Monday 02 February 2026 03:12:15 +0000 (0:00:02.203) 0:00:04.688 ******* 2026-02-02 03:12:19.090944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-02 03:12:19.090981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-02 03:12:19.091001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-02 03:12:19.091061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-02 03:12:19.091081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-02 03:12:19.091120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-02 03:12:19.091138 | orchestrator | 2026-02-02 03:12:19.091154 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-02-02 03:12:19.091170 | orchestrator | Monday 02 February 2026 03:12:17 +0000 (0:00:02.275) 0:00:06.964 ******* 2026-02-02 03:12:19.091187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-02 03:12:19.091205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 
'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-02 03:12:19.091229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-02 03:12:19.091255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-02 03:12:19.091269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-02 03:12:19.091291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-02 03:12:25.530010 | orchestrator | 2026-02-02 03:12:25.530180 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-02 03:12:25.530195 | orchestrator | Monday 02 February 2026 03:12:18 +0000 (0:00:01.303) 0:00:08.267 ******* 2026-02-02 03:12:25.530204 | orchestrator | 2026-02-02 03:12:25.530213 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-02 03:12:25.530223 | orchestrator | Monday 02 February 2026 03:12:18 +0000 (0:00:00.070) 0:00:08.338 ******* 2026-02-02 03:12:25.530232 | orchestrator | 2026-02-02 03:12:25.530241 | orchestrator | TASK [redis : Flush handlers] 
************************************************** 2026-02-02 03:12:25.530249 | orchestrator | Monday 02 February 2026 03:12:19 +0000 (0:00:00.065) 0:00:08.403 ******* 2026-02-02 03:12:25.530259 | orchestrator | 2026-02-02 03:12:25.530268 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-02-02 03:12:25.530276 | orchestrator | Monday 02 February 2026 03:12:19 +0000 (0:00:00.067) 0:00:08.471 ******* 2026-02-02 03:12:25.530285 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:12:25.530296 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:12:25.530304 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:12:25.530314 | orchestrator | 2026-02-02 03:12:25.530323 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-02-02 03:12:25.530331 | orchestrator | Monday 02 February 2026 03:12:21 +0000 (0:00:02.701) 0:00:11.172 ******* 2026-02-02 03:12:25.530368 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:12:25.530378 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:12:25.530387 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:12:25.530397 | orchestrator | 2026-02-02 03:12:25.530406 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 03:12:25.530416 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 03:12:25.530427 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 03:12:25.530459 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 03:12:25.530468 | orchestrator | 2026-02-02 03:12:25.530477 | orchestrator | 2026-02-02 03:12:25.530487 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 03:12:25.530497 | orchestrator | Monday 02 February 
2026 03:12:25 +0000 (0:00:03.354) 0:00:14.527 ******* 2026-02-02 03:12:25.530505 | orchestrator | =============================================================================== 2026-02-02 03:12:25.530514 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.35s 2026-02-02 03:12:25.530524 | orchestrator | redis : Restart redis container ----------------------------------------- 2.70s 2026-02-02 03:12:25.530532 | orchestrator | redis : Copying over redis config files --------------------------------- 2.28s 2026-02-02 03:12:25.530542 | orchestrator | redis : Copying over default config.json files -------------------------- 2.20s 2026-02-02 03:12:25.530552 | orchestrator | redis : Check redis containers ------------------------------------------ 1.30s 2026-02-02 03:12:25.530561 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.01s 2026-02-02 03:12:25.530570 | orchestrator | redis : include_tasks --------------------------------------------------- 0.52s 2026-02-02 03:12:25.530579 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s 2026-02-02 03:12:25.530616 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2026-02-02 03:12:25.530627 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.20s 2026-02-02 03:12:28.134240 | orchestrator | 2026-02-02 03:12:28 | INFO  | Task e6160050-88e0-4cdc-b8da-f5b326b2e050 (mariadb) was prepared for execution. 2026-02-02 03:12:28.134334 | orchestrator | 2026-02-02 03:12:28 | INFO  | It takes a moment until task e6160050-88e0-4cdc-b8da-f5b326b2e050 (mariadb) has been started and output is visible here. 
2026-02-02 03:12:42.595384 | orchestrator | 2026-02-02 03:12:42.595489 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 03:12:42.595507 | orchestrator | 2026-02-02 03:12:42.595519 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 03:12:42.595531 | orchestrator | Monday 02 February 2026 03:12:32 +0000 (0:00:00.187) 0:00:00.187 ******* 2026-02-02 03:12:42.595541 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:12:42.595553 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:12:42.595564 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:12:42.595626 | orchestrator | 2026-02-02 03:12:42.595637 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 03:12:42.595649 | orchestrator | Monday 02 February 2026 03:12:33 +0000 (0:00:00.294) 0:00:00.481 ******* 2026-02-02 03:12:42.595726 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-02 03:12:42.595737 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-02-02 03:12:42.595747 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-02 03:12:42.595757 | orchestrator | 2026-02-02 03:12:42.595767 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-02 03:12:42.595778 | orchestrator | 2026-02-02 03:12:42.595789 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-02 03:12:42.595830 | orchestrator | Monday 02 February 2026 03:12:33 +0000 (0:00:00.571) 0:00:01.052 ******* 2026-02-02 03:12:42.595842 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-02 03:12:42.595854 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-02 03:12:42.595916 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-02 03:12:42.595928 | orchestrator | 
2026-02-02 03:12:42.595940 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-02 03:12:42.595952 | orchestrator | Monday 02 February 2026 03:12:33 +0000 (0:00:00.379) 0:00:01.431 ******* 2026-02-02 03:12:42.595964 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:12:42.595977 | orchestrator | 2026-02-02 03:12:42.595989 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-02 03:12:42.596001 | orchestrator | Monday 02 February 2026 03:12:34 +0000 (0:00:00.524) 0:00:01.956 ******* 2026-02-02 03:12:42.596035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-02 03:12:42.596074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-02 03:12:42.596111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' 
server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-02 03:12:42.596124 | orchestrator | 2026-02-02 03:12:42.596135 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-02 03:12:42.596146 | orchestrator | Monday 02 February 2026 03:12:37 +0000 (0:00:02.728) 0:00:04.685 ******* 2026-02-02 03:12:42.596157 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:12:42.596170 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:12:42.596180 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:12:42.596191 | orchestrator | 2026-02-02 03:12:42.596202 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-02-02 03:12:42.596214 | orchestrator | Monday 02 February 2026 03:12:38 +0000 (0:00:00.755) 0:00:05.441 ******* 2026-02-02 03:12:42.596225 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:12:42.596235 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:12:42.596246 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:12:42.596257 | orchestrator | 2026-02-02 03:12:42.596268 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-02-02 03:12:42.596279 | orchestrator | Monday 02 February 2026 03:12:39 +0000 (0:00:01.472) 0:00:06.913 ******* 2026-02-02 03:12:42.596300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-02 03:12:50.324045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-02 03:12:50.324144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-02 03:12:50.324178 | orchestrator |
2026-02-02 03:12:50.324190 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-02-02 03:12:50.324201 | orchestrator | Monday 02 February 2026 03:12:42 +0000 (0:00:03.112) 0:00:10.025 *******
2026-02-02 03:12:50.324209 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:12:50.324218 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:12:50.324226 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:12:50.324234 | orchestrator |
2026-02-02 03:12:50.324243 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-02-02 03:12:50.324264 | orchestrator | Monday 02 February 2026 03:12:43 +0000 (0:00:01.068) 0:00:11.094 *******
2026-02-02 03:12:50.324272 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:12:50.324280 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:12:50.324301 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:12:50.324310 | orchestrator |
2026-02-02 03:12:50.324319 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-02 03:12:50.324327 | orchestrator | Monday 02 February 2026 03:12:47 +0000 (0:00:03.769) 0:00:14.863 *******
2026-02-02 03:12:50.324335 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 03:12:50.324344 | orchestrator |
2026-02-02 03:12:50.324352 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-02-02 03:12:50.324360 | orchestrator | Monday 02 February 2026 03:12:47 +0000 (0:00:00.559) 0:00:15.423 *******
2026-02-02 03:12:50.324374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 03:12:50.324392 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:12:50.324407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 03:12:55.312191 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:12:55.312344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 03:12:55.312405 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:12:55.312416 | orchestrator | 2026-02-02 03:12:55.312427 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-02 03:12:55.312438 | orchestrator | Monday 02 February 2026 03:12:50 +0000 (0:00:02.329) 0:00:17.752 ******* 2026-02-02 03:12:55.312449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 03:12:55.312459 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:12:55.312491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 03:12:55.312510 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:12:55.312520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 03:12:55.312529 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:12:55.312538 | orchestrator | 2026-02-02 03:12:55.312547 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-02 03:12:55.312600 | orchestrator | Monday 02 February 2026 03:12:52 +0000 (0:00:02.565) 0:00:20.318 ******* 2026-02-02 03:12:55.312632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 03:12:58.091965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 03:12:58.092053 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:12:58.092064 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:12:58.092086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 03:12:58.092112 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:12:58.092120 | orchestrator | 2026-02-02 03:12:58.092128 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-02-02 03:12:58.092137 | orchestrator | Monday 02 February 2026 03:12:55 +0000 (0:00:02.424) 0:00:22.743 ******* 2026-02-02 03:12:58.092157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-02 03:12:58.092166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-02 03:12:58.092183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-02 03:15:16.749395 | orchestrator |
2026-02-02 03:15:16.749545 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-02-02 03:15:16.749564 | orchestrator | Monday 02 February 2026 03:12:58 +0000 (0:00:02.778) 0:00:25.521 *******
2026-02-02 03:15:16.749578 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:15:16.749592 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:15:16.749605 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:15:16.749619 | orchestrator |
2026-02-02 03:15:16.749632 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-02-02 03:15:16.749646 | orchestrator | Monday 02 February 2026 03:12:58 +0000 (0:00:00.804) 0:00:26.326 *******
2026-02-02 03:15:16.749660 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:15:16.749674 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:15:16.749687 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:15:16.749700 | orchestrator |
2026-02-02 03:15:16.749712 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-02-02 03:15:16.749724 | orchestrator | Monday 02 February 2026 03:12:59 +0000 (0:00:00.524) 0:00:26.850 *******
2026-02-02 03:15:16.749737 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:15:16.749749 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:15:16.749762 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:15:16.749774 | orchestrator |
2026-02-02 03:15:16.749787 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-02-02 03:15:16.749800 | orchestrator | Monday 02 February 2026 03:12:59 +0000 (0:00:00.309) 0:00:27.160 *******
2026-02-02 03:15:16.749831 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-02-02 03:15:16.749869 | orchestrator | ...ignoring
2026-02-02 03:15:16.749884 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-02-02 03:15:16.749910 | orchestrator | ...ignoring
2026-02-02 03:15:16.749923 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-02-02 03:15:16.749937 | orchestrator | ...ignoring
2026-02-02 03:15:16.749976 | orchestrator |
2026-02-02 03:15:16.749990 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-02-02 03:15:16.750003 | orchestrator | Monday 02 February 2026 03:13:10 +0000 (0:00:10.922) 0:00:38.082 *******
2026-02-02 03:15:16.750092 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:15:16.750108 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:15:16.750122 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:15:16.750135 | orchestrator |
2026-02-02 03:15:16.750160 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-02-02 03:15:16.750174 | orchestrator | Monday 02 February 2026 03:13:11 +0000 (0:00:00.450) 0:00:38.533 *******
2026-02-02 03:15:16.750188 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:15:16.750201 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:15:16.750215 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:15:16.750228 | orchestrator |
2026-02-02 03:15:16.750240 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-02-02 03:15:16.750254 | orchestrator | Monday 02 February 2026 03:13:11 +0000 (0:00:00.661) 0:00:39.195 *******
2026-02-02 03:15:16.750267 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:15:16.750280 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:15:16.750293 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:15:16.750306 | orchestrator |
2026-02-02 03:15:16.750336 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-02-02 03:15:16.750350 | orchestrator | Monday 02 February 2026 03:13:12 +0000 (0:00:00.415) 0:00:39.610 *******
2026-02-02 03:15:16.750364 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:15:16.750376 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:15:16.750389 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:15:16.750402 | orchestrator |
2026-02-02 03:15:16.750415 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-02-02 03:15:16.750483 | orchestrator | Monday 02 February 2026 03:13:12 +0000 (0:00:00.432) 0:00:40.043 *******
2026-02-02 03:15:16.750496 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:15:16.750508 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:15:16.750521 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:15:16.750533 | orchestrator |
2026-02-02 03:15:16.750546 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-02-02 03:15:16.750579 | orchestrator | Monday 02 February 2026 03:13:13 +0000 (0:00:00.416) 0:00:40.459 *******
2026-02-02 03:15:16.750592 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:15:16.750605 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:15:16.750618 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:15:16.750631 | orchestrator |
2026-02-02 03:15:16.750644 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-02 03:15:16.750673 | orchestrator | Monday 02 February 2026 03:13:13 +0000 (0:00:00.634) 0:00:41.093 *******
2026-02-02 03:15:16.750686 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:15:16.750698 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:15:16.750711 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-02-02 03:15:16.750723 | orchestrator |
2026-02-02 03:15:16.750735 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-02-02 03:15:16.750748 | orchestrator | Monday 02 February 2026 03:13:14 +0000 (0:00:00.419) 0:00:41.513 *******
2026-02-02 03:15:16.750761 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:15:16.750774 | orchestrator |
2026-02-02 03:15:16.750787 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-02-02 03:15:16.750800 | orchestrator | Monday 02 February 2026 03:13:24 +0000 (0:00:10.205) 0:00:51.718 *******
2026-02-02 03:15:16.750813 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:15:16.750825 | orchestrator |
2026-02-02 03:15:16.750838 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-02 03:15:16.750851 | orchestrator | Monday 02 February 2026 03:13:24 +0000 (0:00:00.160) 0:00:51.879 *******
2026-02-02 03:15:16.750864 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:15:16.750916 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:15:16.750930 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:15:16.750942 | orchestrator |
2026-02-02 03:15:16.750956 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-02-02 03:15:16.750968 | orchestrator | Monday 02 February 2026 03:13:25 +0000 (0:00:01.007) 0:00:52.886 *******
2026-02-02 03:15:16.750981 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:15:16.750993 | orchestrator |
2026-02-02 03:15:16.751006 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-02-02 03:15:16.751019 | orchestrator | Monday 02 February 2026 03:13:33 +0000 (0:00:08.252) 0:01:01.138 *******
2026-02-02 03:15:16.751032 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:15:16.751044 | orchestrator |
2026-02-02 03:15:16.751057 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-02-02 03:15:16.751071 | orchestrator | Monday 02 February 2026 03:13:35 +0000 (0:00:01.601) 0:01:02.740 *******
2026-02-02 03:15:16.751084 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:15:16.751097 |
orchestrator | 2026-02-02 03:15:16.751110 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-02-02 03:15:16.751122 | orchestrator | Monday 02 February 2026 03:13:37 +0000 (0:00:02.540) 0:01:05.281 ******* 2026-02-02 03:15:16.751135 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:15:16.751149 | orchestrator | 2026-02-02 03:15:16.751162 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-02-02 03:15:16.751175 | orchestrator | Monday 02 February 2026 03:13:37 +0000 (0:00:00.116) 0:01:05.398 ******* 2026-02-02 03:15:16.751188 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:15:16.751200 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:15:16.751213 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:15:16.751227 | orchestrator | 2026-02-02 03:15:16.751239 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-02-02 03:15:16.751252 | orchestrator | Monday 02 February 2026 03:13:38 +0000 (0:00:00.369) 0:01:05.767 ******* 2026-02-02 03:15:16.751264 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:15:16.751277 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-02-02 03:15:16.751290 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:15:16.751303 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:15:16.751315 | orchestrator | 2026-02-02 03:15:16.751328 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-02 03:15:16.751340 | orchestrator | skipping: no hosts matched 2026-02-02 03:15:16.751353 | orchestrator | 2026-02-02 03:15:16.751365 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-02 03:15:16.751378 | orchestrator | 2026-02-02 03:15:16.751390 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-02-02 03:15:16.751403 | orchestrator | Monday 02 February 2026 03:13:38 +0000 (0:00:00.618) 0:01:06.386 ******* 2026-02-02 03:15:16.751416 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:15:16.751452 | orchestrator | 2026-02-02 03:15:16.751464 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-02 03:15:16.751478 | orchestrator | Monday 02 February 2026 03:13:57 +0000 (0:00:18.853) 0:01:25.240 ******* 2026-02-02 03:15:16.751490 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:15:16.751503 | orchestrator | 2026-02-02 03:15:16.751516 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-02 03:15:16.751528 | orchestrator | Monday 02 February 2026 03:14:14 +0000 (0:00:16.570) 0:01:41.810 ******* 2026-02-02 03:15:16.751541 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:15:16.751554 | orchestrator | 2026-02-02 03:15:16.751573 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-02 03:15:16.751586 | orchestrator | 2026-02-02 03:15:16.751608 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-02 03:15:16.751620 | orchestrator | Monday 02 February 2026 03:14:16 +0000 (0:00:02.418) 0:01:44.229 ******* 2026-02-02 03:15:16.751642 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:15:16.751654 | orchestrator | 2026-02-02 03:15:16.751666 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-02 03:15:16.751679 | orchestrator | Monday 02 February 2026 03:14:35 +0000 (0:00:19.180) 0:02:03.410 ******* 2026-02-02 03:15:16.751691 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:15:16.751703 | orchestrator | 2026-02-02 03:15:16.751716 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-02 03:15:16.751728 
| orchestrator | Monday 02 February 2026 03:14:52 +0000 (0:00:16.564) 0:02:19.974 ******* 2026-02-02 03:15:16.751740 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:15:16.751753 | orchestrator | 2026-02-02 03:15:16.751765 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-02 03:15:16.751777 | orchestrator | 2026-02-02 03:15:16.751789 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-02 03:15:16.751801 | orchestrator | Monday 02 February 2026 03:14:55 +0000 (0:00:02.677) 0:02:22.651 ******* 2026-02-02 03:15:16.751813 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:15:16.751826 | orchestrator | 2026-02-02 03:15:16.751839 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-02 03:15:16.751852 | orchestrator | Monday 02 February 2026 03:15:07 +0000 (0:00:12.526) 0:02:35.178 ******* 2026-02-02 03:15:16.751863 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:15:16.751876 | orchestrator | 2026-02-02 03:15:16.751888 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-02 03:15:16.751901 | orchestrator | Monday 02 February 2026 03:15:13 +0000 (0:00:05.559) 0:02:40.737 ******* 2026-02-02 03:15:16.751913 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:15:16.752002 | orchestrator | 2026-02-02 03:15:16.752014 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-02 03:15:16.752027 | orchestrator | 2026-02-02 03:15:16.752040 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-02 03:15:16.752053 | orchestrator | Monday 02 February 2026 03:15:16 +0000 (0:00:02.734) 0:02:43.472 ******* 2026-02-02 03:15:16.752066 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:15:16.752079 | orchestrator | 
2026-02-02 03:15:16.752091 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-02-02 03:15:16.752117 | orchestrator | Monday 02 February 2026 03:15:16 +0000 (0:00:00.704) 0:02:44.176 ******* 2026-02-02 03:15:29.609283 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:15:29.609397 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:15:29.609458 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:15:29.609469 | orchestrator | 2026-02-02 03:15:29.609481 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-02-02 03:15:29.609493 | orchestrator | Monday 02 February 2026 03:15:18 +0000 (0:00:02.194) 0:02:46.371 ******* 2026-02-02 03:15:29.609503 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:15:29.609513 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:15:29.609523 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:15:29.609533 | orchestrator | 2026-02-02 03:15:29.609544 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-02-02 03:15:29.609554 | orchestrator | Monday 02 February 2026 03:15:21 +0000 (0:00:02.077) 0:02:48.449 ******* 2026-02-02 03:15:29.609564 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:15:29.609573 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:15:29.609583 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:15:29.609593 | orchestrator | 2026-02-02 03:15:29.609603 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-02-02 03:15:29.609613 | orchestrator | Monday 02 February 2026 03:15:23 +0000 (0:00:02.379) 0:02:50.828 ******* 2026-02-02 03:15:29.609623 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:15:29.609633 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:15:29.609643 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:15:29.609652 | orchestrator | 
2026-02-02 03:15:29.609689 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-02 03:15:29.609700 | orchestrator | Monday 02 February 2026 03:15:25 +0000 (0:00:02.224) 0:02:53.052 ******* 2026-02-02 03:15:29.609710 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:15:29.609726 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:15:29.609742 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:15:29.609760 | orchestrator | 2026-02-02 03:15:29.609777 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-02 03:15:29.609794 | orchestrator | Monday 02 February 2026 03:15:28 +0000 (0:00:03.211) 0:02:56.264 ******* 2026-02-02 03:15:29.609810 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:15:29.609846 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:15:29.609865 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:15:29.609880 | orchestrator | 2026-02-02 03:15:29.609895 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 03:15:29.609914 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-02-02 03:15:29.609933 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-02-02 03:15:29.609950 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-02-02 03:15:29.609967 | orchestrator | 2026-02-02 03:15:29.609982 | orchestrator | 2026-02-02 03:15:29.609998 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 03:15:29.610013 | orchestrator | Monday 02 February 2026 03:15:29 +0000 (0:00:00.219) 0:02:56.483 ******* 2026-02-02 03:15:29.610102 | orchestrator | =============================================================================== 2026-02-02 03:15:29.610140 | 
orchestrator | mariadb : Restart MariaDB container ------------------------------------ 38.03s 2026-02-02 03:15:29.610158 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 33.13s 2026-02-02 03:15:29.610173 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.53s 2026-02-02 03:15:29.610189 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.92s 2026-02-02 03:15:29.610206 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.21s 2026-02-02 03:15:29.610222 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.25s 2026-02-02 03:15:29.610238 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.56s 2026-02-02 03:15:29.610256 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.10s 2026-02-02 03:15:29.610271 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.77s 2026-02-02 03:15:29.610288 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.21s 2026-02-02 03:15:29.610305 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.11s 2026-02-02 03:15:29.610321 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.78s 2026-02-02 03:15:29.610337 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.73s 2026-02-02 03:15:29.610353 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.73s 2026-02-02 03:15:29.610368 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.57s 2026-02-02 03:15:29.610385 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.54s 2026-02-02 03:15:29.610401 | 
orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.42s 2026-02-02 03:15:29.610446 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.38s 2026-02-02 03:15:29.610463 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.33s 2026-02-02 03:15:29.610479 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.22s 2026-02-02 03:15:32.431167 | orchestrator | 2026-02-02 03:15:32 | INFO  | Task 3ad45647-c06d-4a4a-9e8e-31053528a958 (rabbitmq) was prepared for execution. 2026-02-02 03:15:32.431265 | orchestrator | 2026-02-02 03:15:32 | INFO  | It takes a moment until task 3ad45647-c06d-4a4a-9e8e-31053528a958 (rabbitmq) has been started and output is visible here. 2026-02-02 03:15:46.085968 | orchestrator | 2026-02-02 03:15:46.086143 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 03:15:46.086163 | orchestrator | 2026-02-02 03:15:46.086175 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 03:15:46.086187 | orchestrator | Monday 02 February 2026 03:15:36 +0000 (0:00:00.175) 0:00:00.175 ******* 2026-02-02 03:15:46.086198 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:15:46.086208 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:15:46.086219 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:15:46.086228 | orchestrator | 2026-02-02 03:15:46.086238 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 03:15:46.086249 | orchestrator | Monday 02 February 2026 03:15:37 +0000 (0:00:00.317) 0:00:00.493 ******* 2026-02-02 03:15:46.086260 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-02-02 03:15:46.086271 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-02-02 03:15:46.086281 | orchestrator | ok: 
[testbed-node-2] => (item=enable_rabbitmq_True) 2026-02-02 03:15:46.086293 | orchestrator | 2026-02-02 03:15:46.086303 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-02-02 03:15:46.086314 | orchestrator | 2026-02-02 03:15:46.086325 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-02 03:15:46.086334 | orchestrator | Monday 02 February 2026 03:15:37 +0000 (0:00:00.558) 0:00:01.051 ******* 2026-02-02 03:15:46.086345 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:15:46.086357 | orchestrator | 2026-02-02 03:15:46.086367 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-02 03:15:46.086377 | orchestrator | Monday 02 February 2026 03:15:38 +0000 (0:00:00.545) 0:00:01.597 ******* 2026-02-02 03:15:46.086387 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:15:46.086423 | orchestrator | 2026-02-02 03:15:46.086434 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-02-02 03:15:46.086445 | orchestrator | Monday 02 February 2026 03:15:39 +0000 (0:00:00.993) 0:00:02.590 ******* 2026-02-02 03:15:46.086455 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:15:46.086465 | orchestrator | 2026-02-02 03:15:46.086476 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-02-02 03:15:46.086487 | orchestrator | Monday 02 February 2026 03:15:39 +0000 (0:00:00.388) 0:00:02.979 ******* 2026-02-02 03:15:46.086498 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:15:46.086509 | orchestrator | 2026-02-02 03:15:46.086521 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-02-02 03:15:46.086536 | orchestrator | Monday 02 February 2026 03:15:40 +0000 (0:00:00.379) 0:00:03.358 ******* 
2026-02-02 03:15:46.086549 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:15:46.086560 | orchestrator | 2026-02-02 03:15:46.086571 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-02-02 03:15:46.086582 | orchestrator | Monday 02 February 2026 03:15:40 +0000 (0:00:00.373) 0:00:03.732 ******* 2026-02-02 03:15:46.086596 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:15:46.086611 | orchestrator | 2026-02-02 03:15:46.086625 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-02 03:15:46.086638 | orchestrator | Monday 02 February 2026 03:15:41 +0000 (0:00:00.642) 0:00:04.374 ******* 2026-02-02 03:15:46.086669 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:15:46.086707 | orchestrator | 2026-02-02 03:15:46.086720 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-02 03:15:46.086731 | orchestrator | Monday 02 February 2026 03:15:42 +0000 (0:00:00.868) 0:00:05.243 ******* 2026-02-02 03:15:46.086742 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:15:46.086752 | orchestrator | 2026-02-02 03:15:46.086763 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-02 03:15:46.086773 | orchestrator | Monday 02 February 2026 03:15:42 +0000 (0:00:00.828) 0:00:06.072 ******* 2026-02-02 03:15:46.086784 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:15:46.086793 | orchestrator | 2026-02-02 03:15:46.086810 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-02 03:15:46.086820 | orchestrator | Monday 02 February 2026 03:15:43 +0000 (0:00:00.361) 0:00:06.433 ******* 2026-02-02 03:15:46.086833 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:15:46.086846 | orchestrator | 2026-02-02 
03:15:46.086855 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-02 03:15:46.086865 | orchestrator | Monday 02 February 2026 03:15:43 +0000 (0:00:00.390) 0:00:06.824 ******* 2026-02-02 03:15:46.086906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 03:15:46.086922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 03:15:46.086936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 03:15:46.086959 | orchestrator | 2026-02-02 03:15:46.086978 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-02 03:15:46.086989 | orchestrator | Monday 02 February 2026 03:15:44 +0000 (0:00:00.806) 0:00:07.631 ******* 2026-02-02 03:15:46.087000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 03:15:46.087022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 03:16:04.285154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 03:16:04.285281 | orchestrator | 2026-02-02 03:16:04.285312 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-02-02 03:16:04.285327 | orchestrator | Monday 02 February 2026 03:15:46 +0000 (0:00:01.657) 0:00:09.288 ******* 2026-02-02 03:16:04.285362 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-02 03:16:04.285424 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-02 03:16:04.285438 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-02 03:16:04.285449 | orchestrator | 2026-02-02 03:16:04.285461 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] 
*********************************** 2026-02-02 03:16:04.285471 | orchestrator | Monday 02 February 2026 03:15:47 +0000 (0:00:01.382) 0:00:10.671 ******* 2026-02-02 03:16:04.285497 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-02 03:16:04.285509 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-02 03:16:04.285521 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-02 03:16:04.285531 | orchestrator | 2026-02-02 03:16:04.285542 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-02 03:16:04.285553 | orchestrator | Monday 02 February 2026 03:15:49 +0000 (0:00:01.734) 0:00:12.406 ******* 2026-02-02 03:16:04.285564 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-02 03:16:04.285575 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-02 03:16:04.285586 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-02 03:16:04.285597 | orchestrator | 2026-02-02 03:16:04.285607 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-02-02 03:16:04.285618 | orchestrator | Monday 02 February 2026 03:15:50 +0000 (0:00:01.329) 0:00:13.735 ******* 2026-02-02 03:16:04.285629 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-02 03:16:04.285642 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-02 03:16:04.285654 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-02 03:16:04.285667 | orchestrator | 2026-02-02 03:16:04.285680 | orchestrator | TASK 
[rabbitmq : Copying over definitions.json] ******************************** 2026-02-02 03:16:04.285693 | orchestrator | Monday 02 February 2026 03:15:52 +0000 (0:00:01.872) 0:00:15.608 ******* 2026-02-02 03:16:04.285705 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-02 03:16:04.285719 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-02 03:16:04.285736 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-02 03:16:04.285756 | orchestrator | 2026-02-02 03:16:04.285768 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-02-02 03:16:04.285800 | orchestrator | Monday 02 February 2026 03:15:53 +0000 (0:00:01.347) 0:00:16.956 ******* 2026-02-02 03:16:04.285824 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-02 03:16:04.285838 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-02 03:16:04.285851 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-02 03:16:04.285876 | orchestrator | 2026-02-02 03:16:04.285899 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-02 03:16:04.285912 | orchestrator | Monday 02 February 2026 03:15:55 +0000 (0:00:01.351) 0:00:18.307 ******* 2026-02-02 03:16:04.285926 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:16:04.285954 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:16:04.285986 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:16:04.286010 | orchestrator | 2026-02-02 03:16:04.286082 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-02-02 03:16:04.286094 | orchestrator | Monday 
02 February 2026 03:15:55 +0000 (0:00:00.448) 0:00:18.756 ******* 2026-02-02 03:16:04.286107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 03:16:04.286127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 03:16:04.286141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 03:16:04.286153 | orchestrator | 2026-02-02 03:16:04.286165 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-02-02 03:16:04.286177 | orchestrator | Monday 02 February 2026 03:15:56 +0000 (0:00:01.172) 0:00:19.928 ******* 2026-02-02 03:16:04.286196 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:16:04.286214 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:16:04.286242 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:16:04.286262 | orchestrator | 2026-02-02 03:16:04.286281 | orchestrator | TASK 
[rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-02-02 03:16:04.286310 | orchestrator | Monday 02 February 2026 03:15:57 +0000 (0:00:00.789) 0:00:20.718 *******
2026-02-02 03:16:04.286328 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:16:04.286345 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:16:04.286364 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:16:04.286461 | orchestrator |
2026-02-02 03:16:04.286484 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-02-02 03:16:04.286517 | orchestrator | Monday 02 February 2026 03:16:04 +0000 (0:00:06.769) 0:00:27.487 *******
2026-02-02 03:17:37.175258 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:17:37.175421 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:17:37.175435 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:17:37.175442 | orchestrator |
2026-02-02 03:17:37.175450 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-02-02 03:17:37.175459 | orchestrator |
2026-02-02 03:17:37.175466 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-02-02 03:17:37.175475 | orchestrator | Monday 02 February 2026 03:16:04 +0000 (0:00:00.550) 0:00:28.038 *******
2026-02-02 03:17:37.175482 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:17:37.175489 | orchestrator |
2026-02-02 03:17:37.175496 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-02-02 03:17:37.175504 | orchestrator | Monday 02 February 2026 03:16:05 +0000 (0:00:00.576) 0:00:28.615 *******
2026-02-02 03:17:37.175510 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:17:37.175517 | orchestrator |
2026-02-02 03:17:37.175524 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-02-02 03:17:37.175531 | orchestrator | Monday 02 February 2026 03:16:05 +0000 (0:00:00.261) 0:00:28.877 *******
2026-02-02 03:17:37.175538 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:17:37.175545 | orchestrator |
2026-02-02 03:17:37.175552 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-02-02 03:17:37.175558 | orchestrator | Monday 02 February 2026 03:16:07 +0000 (0:00:01.580) 0:00:30.457 *******
2026-02-02 03:17:37.175566 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:17:37.175573 | orchestrator |
2026-02-02 03:17:37.175579 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-02-02 03:17:37.175586 | orchestrator |
2026-02-02 03:17:37.175592 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-02-02 03:17:37.175599 | orchestrator | Monday 02 February 2026 03:17:01 +0000 (0:00:54.017) 0:01:24.475 *******
2026-02-02 03:17:37.175606 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:17:37.175612 | orchestrator |
2026-02-02 03:17:37.175619 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-02-02 03:17:37.175625 | orchestrator | Monday 02 February 2026 03:17:01 +0000 (0:00:00.573) 0:01:25.049 *******
2026-02-02 03:17:37.175632 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:17:37.175638 | orchestrator |
2026-02-02 03:17:37.175643 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-02-02 03:17:37.175649 | orchestrator | Monday 02 February 2026 03:17:02 +0000 (0:00:00.227) 0:01:25.276 *******
2026-02-02 03:17:37.175655 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:17:37.175661 | orchestrator |
2026-02-02 03:17:37.175668 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-02-02 03:17:37.175691 | orchestrator | Monday 02 February 2026 03:17:03 +0000 (0:00:01.554) 0:01:26.831 *******
2026-02-02 03:17:37.175698 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:17:37.175704 | orchestrator |
2026-02-02 03:17:37.175710 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-02-02 03:17:37.175715 | orchestrator |
2026-02-02 03:17:37.175721 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-02-02 03:17:37.175728 | orchestrator | Monday 02 February 2026 03:17:17 +0000 (0:00:13.686) 0:01:40.517 *******
2026-02-02 03:17:37.175735 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:17:37.175741 | orchestrator |
2026-02-02 03:17:37.175768 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-02-02 03:17:37.175775 | orchestrator | Monday 02 February 2026 03:17:18 +0000 (0:00:00.802) 0:01:41.320 *******
2026-02-02 03:17:37.175780 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:17:37.175786 | orchestrator |
2026-02-02 03:17:37.175792 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-02-02 03:17:37.175798 | orchestrator | Monday 02 February 2026 03:17:18 +0000 (0:00:00.269) 0:01:41.589 *******
2026-02-02 03:17:37.175804 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:17:37.175812 | orchestrator |
2026-02-02 03:17:37.175819 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-02-02 03:17:37.175825 | orchestrator | Monday 02 February 2026 03:17:20 +0000 (0:00:01.649) 0:01:43.238 *******
2026-02-02 03:17:37.175832 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:17:37.175838 | orchestrator |
2026-02-02 03:17:37.175844 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-02-02 03:17:37.175850 | orchestrator |
2026-02-02 03:17:37.175856 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-02-02 03:17:37.175863 | orchestrator | Monday 02 February 2026 03:17:33 +0000 (0:00:13.945) 0:01:57.184 *******
2026-02-02 03:17:37.175869 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 03:17:37.175876 | orchestrator |
2026-02-02 03:17:37.175882 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-02-02 03:17:37.175889 | orchestrator | Monday 02 February 2026 03:17:34 +0000 (0:00:00.516) 0:01:57.700 *******
2026-02-02 03:17:37.175896 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-02-02 03:17:37.175903 | orchestrator | enable_outward_rabbitmq_True
2026-02-02 03:17:37.175909 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-02-02 03:17:37.175915 | orchestrator | outward_rabbitmq_restart
2026-02-02 03:17:37.175922 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:17:37.175928 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:17:37.175934 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:17:37.175941 | orchestrator |
2026-02-02 03:17:37.175949 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-02-02 03:17:37.175956 | orchestrator | skipping: no hosts matched
2026-02-02 03:17:37.175963 | orchestrator |
2026-02-02 03:17:37.175970 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-02-02 03:17:37.175977 | orchestrator | skipping: no hosts matched
2026-02-02 03:17:37.175984 | orchestrator |
2026-02-02 03:17:37.175992 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-02-02 03:17:37.175998 | orchestrator | skipping: no hosts matched
2026-02-02 03:17:37.176005 | orchestrator |
2026-02-02 03:17:37.176012 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 03:17:37.176101 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-02 03:17:37.176117 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 03:17:37.176123 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 03:17:37.176128 | orchestrator |
2026-02-02 03:17:37.176133 | orchestrator |
2026-02-02 03:17:37.176139 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 03:17:37.176145 | orchestrator | Monday 02 February 2026 03:17:36 +0000 (0:00:02.254) 0:01:59.955 *******
2026-02-02 03:17:37.176151 | orchestrator | ===============================================================================
2026-02-02 03:17:37.176156 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 81.65s
2026-02-02 03:17:37.176162 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.77s
2026-02-02 03:17:37.176178 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 4.78s
2026-02-02 03:17:37.176184 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.25s
2026-02-02 03:17:37.176191 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.95s
2026-02-02 03:17:37.176197 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.87s
2026-02-02 03:17:37.176204 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.73s
2026-02-02 03:17:37.176212 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.66s
2026-02-02 03:17:37.176220 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.38s
2026-02-02 03:17:37.176226 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.35s
2026-02-02 03:17:37.176232 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.35s
2026-02-02 03:17:37.176238 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.33s
2026-02-02 03:17:37.176245 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.17s
2026-02-02 03:17:37.176251 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.99s
2026-02-02 03:17:37.176264 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.87s
2026-02-02 03:17:37.176270 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.83s
2026-02-02 03:17:37.176276 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.81s
2026-02-02 03:17:37.176282 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.79s
2026-02-02 03:17:37.176288 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.76s
2026-02-02 03:17:37.176294 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 0.64s
2026-02-02 03:17:39.710727 | orchestrator | 2026-02-02 03:17:39 | INFO  | Task dde0e21f-f53d-42ba-8dfe-9d1cd486e9a4 (openvswitch) was prepared for execution.
2026-02-02 03:17:39.710861 | orchestrator | 2026-02-02 03:17:39 | INFO  | It takes a moment until task dde0e21f-f53d-42ba-8dfe-9d1cd486e9a4 (openvswitch) has been started and output is visible here.
2026-02-02 03:17:52.900227 | orchestrator |
2026-02-02 03:17:52.900386 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-02 03:17:52.900401 | orchestrator |
2026-02-02 03:17:52.900409 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-02 03:17:52.900418 | orchestrator | Monday 02 February 2026 03:17:44 +0000 (0:00:00.293) 0:00:00.293 *******
2026-02-02 03:17:52.900436 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:17:52.900445 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:17:52.900453 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:17:52.900461 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:17:52.900469 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:17:52.900477 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:17:52.900485 | orchestrator |
2026-02-02 03:17:52.900493 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-02 03:17:52.900501 | orchestrator | Monday 02 February 2026 03:17:44 +0000 (0:00:00.683) 0:00:00.977 *******
2026-02-02 03:17:52.900509 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-02 03:17:52.900518 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-02 03:17:52.900526 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-02 03:17:52.900534 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-02 03:17:52.900542 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-02 03:17:52.900550 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-02 03:17:52.900558 | orchestrator |
2026-02-02 03:17:52.900587 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-02-02 03:17:52.900595 | orchestrator |
2026-02-02 03:17:52.900604 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-02-02 03:17:52.900612 | orchestrator | Monday 02 February 2026 03:17:45 +0000 (0:00:00.646) 0:00:01.623 *******
2026-02-02 03:17:52.900621 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 03:17:52.900630 | orchestrator |
2026-02-02 03:17:52.900638 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-02 03:17:52.900646 | orchestrator | Monday 02 February 2026 03:17:46 +0000 (0:00:01.208) 0:00:02.831 *******
2026-02-02 03:17:52.900654 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-02-02 03:17:52.900662 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-02-02 03:17:52.900670 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-02-02 03:17:52.900678 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-02-02 03:17:52.900686 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-02-02 03:17:52.900694 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-02-02 03:17:52.900702 | orchestrator |
2026-02-02 03:17:52.900710 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-02 03:17:52.900718 | orchestrator | Monday 02 February 2026 03:17:47 +0000 (0:00:01.199) 0:00:04.031 *******
2026-02-02 03:17:52.900725 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-02-02 03:17:52.900733 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-02-02 03:17:52.900741 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-02-02 03:17:52.900749 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-02-02 03:17:52.900757 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-02-02 03:17:52.900765 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-02-02 03:17:52.900773 | orchestrator |
2026-02-02 03:17:52.900782 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-02 03:17:52.900792 | orchestrator | Monday 02 February 2026 03:17:49 +0000 (0:00:01.574) 0:00:05.606 *******
2026-02-02 03:17:52.900801 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch) 
2026-02-02 03:17:52.900810 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:17:52.900820 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch) 
2026-02-02 03:17:52.900829 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:17:52.900839 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch) 
2026-02-02 03:17:52.900848 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:17:52.900857 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch) 
2026-02-02 03:17:52.900866 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:17:52.900875 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch) 
2026-02-02 03:17:52.900884 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:17:52.900893 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch) 
2026-02-02 03:17:52.900902 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:17:52.900912 | orchestrator |
2026-02-02 03:17:52.900921 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-02-02 03:17:52.900929 | orchestrator | Monday 02 February 2026 03:17:50 +0000 (0:00:00.777) 0:00:06.880 *******
2026-02-02 03:17:52.900937 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:17:52.900945 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:17:52.900953 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:17:52.900961 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:17:52.900969 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:17:52.900977 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:17:52.900985 | orchestrator | 2026-02-02 03:17:52.900993 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-02 03:17:52.901007 | orchestrator | Monday 02 February 2026 03:17:51 +0000 (0:00:00.777) 0:00:07.657 ******* 2026-02-02 03:17:52.901033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 03:17:52.901046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2026-02-02 03:17:52.901055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 03:17:52.901134 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 03:17:52.901153 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 03:17:52.901169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 03:17:55.181031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 03:17:55.181140 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 03:17:55.181157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 03:17:55.181169 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 03:17:55.181197 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 03:17:55.181253 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 03:17:55.181275 | orchestrator | 2026-02-02 03:17:55.181402 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-02 03:17:55.181427 | orchestrator | Monday 02 February 2026 03:17:52 +0000 (0:00:01.468) 0:00:09.125 ******* 2026-02-02 03:17:55.181447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 03:17:55.181465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 03:17:55.181485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 03:17:55.181504 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 03:17:55.181550 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 03:17:55.181588 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 03:17:57.980841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 03:17:57.980975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 03:17:57.980989 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 03:17:57.981065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 03:17:57.981094 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 03:17:57.981118 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 03:17:57.981128 | orchestrator | 2026-02-02 03:17:57.981138 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-02-02 03:17:57.981149 | orchestrator | Monday 02 February 2026 03:17:55 +0000 (0:00:02.295) 0:00:11.421 ******* 2026-02-02 03:17:57.981157 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:17:57.981167 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:17:57.981175 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:17:57.981183 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:17:57.981191 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:17:57.981199 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:17:57.981208 | orchestrator | 2026-02-02 03:17:57.981216 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-02-02 03:17:57.981224 | orchestrator | Monday 02 February 2026 03:17:56 +0000 (0:00:01.011) 0:00:12.432 ******* 2026-02-02 03:17:57.981233 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 03:17:57.981243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 03:17:57.981262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 03:17:57.981271 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 03:17:57.981342 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 03:18:23.610360 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 03:18:23.610469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 03:18:23.610483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 
03:18:23.610535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 03:18:23.610545 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 03:18:23.610569 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 03:18:23.610578 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 03:18:23.610585 | orchestrator | 2026-02-02 03:18:23.610594 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-02 03:18:23.610601 | orchestrator | Monday 02 February 2026 03:17:58 +0000 (0:00:01.768) 0:00:14.200 ******* 2026-02-02 03:18:23.610608 | orchestrator | 2026-02-02 03:18:23.610615 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-02 03:18:23.610623 | orchestrator | Monday 02 February 2026 03:17:58 +0000 (0:00:00.329) 0:00:14.529 ******* 2026-02-02 03:18:23.610635 | orchestrator | 2026-02-02 03:18:23.610641 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-02 03:18:23.610648 | orchestrator | Monday 02 February 2026 03:17:58 +0000 (0:00:00.136) 0:00:14.666 ******* 2026-02-02 03:18:23.610655 | orchestrator | 2026-02-02 03:18:23.610662 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 
2026-02-02 03:18:23.610669 | orchestrator | Monday 02 February 2026 03:17:58 +0000 (0:00:00.128) 0:00:14.794 *******
2026-02-02 03:18:23.610675 | orchestrator |
2026-02-02 03:18:23.610681 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-02 03:18:23.610688 | orchestrator | Monday 02 February 2026 03:17:58 +0000 (0:00:00.132) 0:00:14.926 *******
2026-02-02 03:18:23.610694 | orchestrator |
2026-02-02 03:18:23.610701 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-02 03:18:23.610707 | orchestrator | Monday 02 February 2026 03:17:58 +0000 (0:00:00.133) 0:00:15.060 *******
2026-02-02 03:18:23.610714 | orchestrator |
2026-02-02 03:18:23.610720 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-02-02 03:18:23.610727 | orchestrator | Monday 02 February 2026 03:17:59 +0000 (0:00:00.132) 0:00:15.193 *******
2026-02-02 03:18:23.610733 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:18:23.610741 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:18:23.610747 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:18:23.610754 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:18:23.610760 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:18:23.610767 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:18:23.610773 | orchestrator |
2026-02-02 03:18:23.610780 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-02-02 03:18:23.610787 | orchestrator | Monday 02 February 2026 03:18:08 +0000 (0:00:09.050) 0:00:24.243 *******
2026-02-02 03:18:23.610794 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:18:23.610807 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:18:23.610813 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:18:23.610818 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:18:23.610824 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:18:23.610829 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:18:23.610834 | orchestrator |
2026-02-02 03:18:23.610840 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-02-02 03:18:23.610847 | orchestrator | Monday 02 February 2026 03:18:09 +0000 (0:00:01.091) 0:00:25.335 *******
2026-02-02 03:18:23.610853 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:18:23.610859 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:18:23.610865 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:18:23.610870 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:18:23.610877 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:18:23.610884 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:18:23.610890 | orchestrator |
2026-02-02 03:18:23.610897 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-02-02 03:18:23.610902 | orchestrator | Monday 02 February 2026 03:18:17 +0000 (0:00:08.016) 0:00:33.351 *******
2026-02-02 03:18:23.610909 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-02-02 03:18:23.610916 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-02-02 03:18:23.610922 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-02-02 03:18:23.610929 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-02-02 03:18:23.610936 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-02-02 03:18:23.610942 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-02-02 03:18:23.610949 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-02-02 03:18:23.610971 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-02-02 03:18:37.074737 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-02-02 03:18:37.074895 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-02-02 03:18:37.074917 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-02-02 03:18:37.074933 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-02-02 03:18:37.074947 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-02 03:18:37.074962 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-02 03:18:37.074974 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-02 03:18:37.074986 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-02 03:18:37.075000 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-02 03:18:37.075013 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-02 03:18:37.075027 | orchestrator |
2026-02-02 03:18:37.075042 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-02-02 03:18:37.075075 | orchestrator | Monday 02 February 2026 03:18:23 +0000 (0:00:06.395) 0:00:39.746 *******
2026-02-02 03:18:37.075091 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-02-02 03:18:37.075106 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:18:37.075120 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-02-02 03:18:37.075133 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:18:37.075147 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-02-02 03:18:37.075160 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:18:37.075173 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-02-02 03:18:37.075187 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-02-02 03:18:37.075200 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-02-02 03:18:37.075214 | orchestrator |
2026-02-02 03:18:37.075228 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-02-02 03:18:37.075241 | orchestrator | Monday 02 February 2026 03:18:26 +0000 (0:00:02.603) 0:00:42.350 *******
2026-02-02 03:18:37.075313 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-02-02 03:18:37.075328 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:18:37.075342 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-02-02 03:18:37.075356 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:18:37.075369 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-02-02 03:18:37.075382 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:18:37.075394 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-02-02 03:18:37.075407 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-02-02 03:18:37.075440 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-02-02 03:18:37.075457 | orchestrator |
2026-02-02 03:18:37.075471 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-02-02 03:18:37.075486 | orchestrator | Monday 02 February 2026 03:18:29 +0000 (0:00:03.146) 0:00:45.497 *******
2026-02-02 03:18:37.075500 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:18:37.075514 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:18:37.075551 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:18:37.075562 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:18:37.075571 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:18:37.075581 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:18:37.075590 | orchestrator |
2026-02-02 03:18:37.075599 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 03:18:37.075609 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-02 03:18:37.075619 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-02 03:18:37.075627 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-02 03:18:37.075635 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-02 03:18:37.075642 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-02 03:18:37.075650 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-02 03:18:37.075658 | orchestrator |
2026-02-02 03:18:37.075666 | orchestrator |
2026-02-02 03:18:37.075674 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 03:18:37.075682 | orchestrator | Monday 02 February 2026 03:18:36 +0000 (0:00:07.303) 0:00:52.800 *******
2026-02-02 03:18:37.075711 | orchestrator | ===============================================================================
2026-02-02 03:18:37.075719 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 15.32s
2026-02-02 03:18:37.075727 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.05s
2026-02-02 03:18:37.075735 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.40s
2026-02-02 03:18:37.075743 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.15s
2026-02-02 03:18:37.075751 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.60s
2026-02-02 03:18:37.075759 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.30s
2026-02-02 03:18:37.075767 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.77s
2026-02-02 03:18:37.075775 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.57s
2026-02-02 03:18:37.075782 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.47s
2026-02-02 03:18:37.075791 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.27s
2026-02-02 03:18:37.075799 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.21s
2026-02-02 03:18:37.075807 | orchestrator | module-load : Load modules ---------------------------------------------- 1.20s
2026-02-02 03:18:37.075815 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.09s
2026-02-02 03:18:37.075823 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.01s
2026-02-02 03:18:37.075831 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 0.99s
2026-02-02 03:18:37.075838 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.78s
2026-02-02 03:18:37.075846 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.68s
2026-02-02 03:18:37.075854 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.65s
2026-02-02 03:18:39.551072 | orchestrator | 2026-02-02 03:18:39 | INFO  | Task 69c10a95-c5dc-45f7-bd51-2aa5ecc7a278 (ovn) was prepared for execution.
2026-02-02 03:18:39.551220 | orchestrator | 2026-02-02 03:18:39 | INFO  | It takes a moment until task 69c10a95-c5dc-45f7-bd51-2aa5ecc7a278 (ovn) has been started and output is visible here.
2026-02-02 03:18:50.540007 | orchestrator |
2026-02-02 03:18:50.540127 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-02 03:18:50.540144 | orchestrator |
2026-02-02 03:18:50.540156 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-02 03:18:50.540166 | orchestrator | Monday 02 February 2026 03:18:43 +0000 (0:00:00.186) 0:00:00.186 *******
2026-02-02 03:18:50.540175 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:18:50.540187 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:18:50.540197 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:18:50.540207 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:18:50.540218 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:18:50.540228 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:18:50.540288 | orchestrator |
2026-02-02 03:18:50.540300 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-02 03:18:50.540310 | orchestrator | Monday 02 February 2026 03:18:44 +0000 (0:00:00.740) 0:00:00.926 *******
2026-02-02 03:18:50.540337 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-02-02 03:18:50.540348 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-02-02 03:18:50.540359 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-02-02 03:18:50.540369 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-02-02 03:18:50.540380 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-02-02 03:18:50.540390 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-02-02 03:18:50.540401 | orchestrator |
2026-02-02 03:18:50.540412 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-02-02 03:18:50.540423 | orchestrator |
2026-02-02 03:18:50.540433 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-02-02 03:18:50.540444 | orchestrator | Monday 02 February 2026 03:18:45 +0000 (0:00:00.841) 0:00:01.768 *******
2026-02-02 03:18:50.540456 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 03:18:50.540467 | orchestrator |
2026-02-02 03:18:50.540476 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-02-02 03:18:50.540487 | orchestrator | Monday 02 February 2026 03:18:46 +0000 (0:00:01.161) 0:00:02.929 *******
2026-02-02 03:18:50.540499 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:18:50.540512 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:18:50.540523 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:18:50.540534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:18:50.540571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:18:50.540601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:18:50.540612 | orchestrator | 2026-02-02 03:18:50.540623 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-02-02 03:18:50.540633 | orchestrator | Monday 02 February 2026 03:18:47 +0000 (0:00:01.187) 0:00:04.117 ******* 2026-02-02 03:18:50.540660 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:18:50.540668 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:18:50.540675 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:18:50.540682 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:18:50.540689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:18:50.540696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:18:50.540709 | orchestrator | 2026-02-02 03:18:50.540716 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-02 03:18:50.540723 | orchestrator | Monday 02 February 2026 03:18:49 +0000 (0:00:01.519) 0:00:05.636 ******* 2026-02-02 03:18:50.540731 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:18:50.540738 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:18:50.540751 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:19:13.497030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:19:13.497155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:19:13.497179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:19:13.497197 | orchestrator | 2026-02-02 03:19:13.497215 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-02-02 03:19:13.497298 | orchestrator | Monday 02 February 2026 03:18:50 +0000 (0:00:01.184) 0:00:06.820 ******* 2026-02-02 03:19:13.497310 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:19:13.497321 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:19:13.497354 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:19:13.497365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:19:13.497375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:19:13.497403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:19:13.497414 | orchestrator | 2026-02-02 03:19:13.497424 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-02-02 03:19:13.497434 | orchestrator | Monday 02 February 2026 03:18:52 +0000 (0:00:01.498) 0:00:08.318 ******* 
2026-02-02 03:19:13.497452 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:19:13.497462 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:19:13.497472 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:19:13.497482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:19:13.497499 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:19:13.497509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:19:13.497519 | orchestrator | 2026-02-02 03:19:13.497529 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-02 03:19:13.497539 | orchestrator | Monday 02 February 2026 03:18:53 +0000 (0:00:01.440) 0:00:09.758 ******* 2026-02-02 03:19:13.497549 | orchestrator | changed: [testbed-node-3] 2026-02-02 03:19:13.497560 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:19:13.497570 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:19:13.497580 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:19:13.497590 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:19:13.497599 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:19:13.497609 | orchestrator | 2026-02-02 03:19:13.497618 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-02 03:19:13.497628 | orchestrator | Monday 02 February 2026 03:18:55 +0000 (0:00:02.309) 0:00:12.068 ******* 2026-02-02 03:19:13.497638 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 
2026-02-02 03:19:13.497656 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-02 03:19:13.497671 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-02 03:19:13.497685 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-02 03:19:13.497701 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-02 03:19:13.497717 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-02 03:19:13.497740 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-02 03:19:53.575524 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-02 03:19:53.575636 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-02 03:19:53.575669 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-02 03:19:53.575682 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-02 03:19:53.575693 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-02 03:19:53.575705 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-02 03:19:53.575718 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-02 03:19:53.575752 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-02 03:19:53.575764 | 
orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-02 03:19:53.575775 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-02 03:19:53.575786 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-02 03:19:53.575798 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-02 03:19:53.575811 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-02 03:19:53.575821 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-02 03:19:53.575832 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-02 03:19:53.575843 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-02 03:19:53.575854 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-02 03:19:53.575865 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-02 03:19:53.575876 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-02 03:19:53.575887 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-02 03:19:53.575897 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-02 03:19:53.575908 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 
'value': '60'}) 2026-02-02 03:19:53.575918 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-02 03:19:53.575929 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-02 03:19:53.575940 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-02 03:19:53.575951 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-02 03:19:53.575961 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-02 03:19:53.575972 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-02 03:19:53.575982 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-02 03:19:53.575993 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-02 03:19:53.576004 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-02 03:19:53.576013 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-02 03:19:53.576022 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-02 03:19:53.576031 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-02 03:19:53.576042 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-02 03:19:53.576052 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 
'present'}) 2026-02-02 03:19:53.576090 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-02-02 03:19:53.576129 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-02 03:19:53.576148 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-02 03:19:53.576160 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-02 03:19:53.576173 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-02 03:19:53.576204 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-02 03:19:53.576216 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-02 03:19:53.576228 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-02 03:19:53.576239 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-02 03:19:53.576251 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-02 03:19:53.576264 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-02 03:19:53.576275 | orchestrator | 2026-02-02 03:19:53.576287 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-02-02 03:19:53.576299 | orchestrator | Monday 02 February 2026 03:19:12 +0000 (0:00:17.119) 0:00:29.187 ******* 2026-02-02 03:19:53.576312 | orchestrator | 2026-02-02 03:19:53.576324 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-02 03:19:53.576335 | orchestrator | Monday 02 February 2026 03:19:13 +0000 (0:00:00.236) 0:00:29.424 ******* 2026-02-02 03:19:53.576347 | orchestrator | 2026-02-02 03:19:53.576358 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-02 03:19:53.576370 | orchestrator | Monday 02 February 2026 03:19:13 +0000 (0:00:00.068) 0:00:29.492 ******* 2026-02-02 03:19:53.576383 | orchestrator | 2026-02-02 03:19:53.576395 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-02 03:19:53.576406 | orchestrator | Monday 02 February 2026 03:19:13 +0000 (0:00:00.074) 0:00:29.566 ******* 2026-02-02 03:19:53.576418 | orchestrator | 2026-02-02 03:19:53.576430 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-02 03:19:53.576442 | orchestrator | Monday 02 February 2026 03:19:13 +0000 (0:00:00.073) 0:00:29.640 ******* 2026-02-02 03:19:53.576453 | orchestrator | 2026-02-02 03:19:53.576464 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-02 03:19:53.576475 | orchestrator | Monday 02 February 2026 03:19:13 +0000 (0:00:00.065) 0:00:29.705 ******* 2026-02-02 03:19:53.576485 | orchestrator | 2026-02-02 03:19:53.576496 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-02-02 03:19:53.576507 | orchestrator | Monday 02 February 2026 03:19:13 +0000 (0:00:00.064) 0:00:29.769 ******* 2026-02-02 03:19:53.576519 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:19:53.576531 | orchestrator | ok: 
[testbed-node-5] 2026-02-02 03:19:53.576542 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:19:53.576553 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:19:53.576563 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:19:53.576574 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:19:53.576585 | orchestrator | 2026-02-02 03:19:53.576596 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-02 03:19:53.576607 | orchestrator | Monday 02 February 2026 03:19:15 +0000 (0:00:01.534) 0:00:31.304 ******* 2026-02-02 03:19:53.576626 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:19:53.576638 | orchestrator | changed: [testbed-node-3] 2026-02-02 03:19:53.576648 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:19:53.576659 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:19:53.576670 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:19:53.576680 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:19:53.576691 | orchestrator | 2026-02-02 03:19:53.576702 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-02 03:19:53.576713 | orchestrator | 2026-02-02 03:19:53.576724 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-02 03:19:53.576735 | orchestrator | Monday 02 February 2026 03:19:51 +0000 (0:00:36.355) 0:01:07.660 ******* 2026-02-02 03:19:53.576746 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:19:53.576757 | orchestrator | 2026-02-02 03:19:53.576768 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-02 03:19:53.576778 | orchestrator | Monday 02 February 2026 03:19:52 +0000 (0:00:00.732) 0:01:08.392 ******* 2026-02-02 03:19:53.576789 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-02 03:19:53.576800 | orchestrator | 2026-02-02 03:19:53.576811 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-02-02 03:19:53.576822 | orchestrator | Monday 02 February 2026 03:19:52 +0000 (0:00:00.556) 0:01:08.948 ******* 2026-02-02 03:19:53.576833 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:19:53.576843 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:19:53.576854 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:19:53.576866 | orchestrator | 2026-02-02 03:19:53.576877 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-02 03:19:53.576897 | orchestrator | Monday 02 February 2026 03:19:53 +0000 (0:00:00.906) 0:01:09.855 ******* 2026-02-02 03:20:05.334290 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:20:05.334369 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:20:05.334376 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:20:05.334380 | orchestrator | 2026-02-02 03:20:05.334386 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-02-02 03:20:05.334403 | orchestrator | Monday 02 February 2026 03:19:53 +0000 (0:00:00.345) 0:01:10.201 ******* 2026-02-02 03:20:05.334408 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:20:05.334412 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:20:05.334416 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:20:05.334420 | orchestrator | 2026-02-02 03:20:05.334424 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-02 03:20:05.334428 | orchestrator | Monday 02 February 2026 03:19:54 +0000 (0:00:00.350) 0:01:10.551 ******* 2026-02-02 03:20:05.334431 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:20:05.334435 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:20:05.334439 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:20:05.334443 | orchestrator | 
2026-02-02 03:20:05.334447 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-02-02 03:20:05.334451 | orchestrator | Monday 02 February 2026 03:19:54 +0000 (0:00:00.339) 0:01:10.891 *******
2026-02-02 03:20:05.334454 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:20:05.334458 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:20:05.334462 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:20:05.334465 | orchestrator |
2026-02-02 03:20:05.334469 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-02-02 03:20:05.334473 | orchestrator | Monday 02 February 2026 03:19:55 +0000 (0:00:00.557) 0:01:11.448 *******
2026-02-02 03:20:05.334477 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:20:05.334482 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:20:05.334485 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:20:05.334489 | orchestrator |
2026-02-02 03:20:05.334493 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-02-02 03:20:05.334511 | orchestrator | Monday 02 February 2026 03:19:55 +0000 (0:00:00.304) 0:01:11.753 *******
2026-02-02 03:20:05.334515 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:20:05.334518 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:20:05.334522 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:20:05.334526 | orchestrator |
2026-02-02 03:20:05.334530 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-02-02 03:20:05.334533 | orchestrator | Monday 02 February 2026 03:19:55 +0000 (0:00:00.361) 0:01:12.114 *******
2026-02-02 03:20:05.334537 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:20:05.334541 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:20:05.334545 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:20:05.334548 | orchestrator |
2026-02-02 03:20:05.334552 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-02-02 03:20:05.334556 | orchestrator | Monday 02 February 2026 03:19:56 +0000 (0:00:00.297) 0:01:12.411 *******
2026-02-02 03:20:05.334560 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:20:05.334563 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:20:05.334567 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:20:05.334571 | orchestrator |
2026-02-02 03:20:05.334575 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-02-02 03:20:05.334578 | orchestrator | Monday 02 February 2026 03:19:56 +0000 (0:00:00.343) 0:01:12.755 *******
2026-02-02 03:20:05.334582 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:20:05.334586 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:20:05.334590 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:20:05.334594 | orchestrator |
2026-02-02 03:20:05.334598 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-02-02 03:20:05.334601 | orchestrator | Monday 02 February 2026 03:19:57 +0000 (0:00:00.542) 0:01:13.298 *******
2026-02-02 03:20:05.334605 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:20:05.334609 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:20:05.334613 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:20:05.334616 | orchestrator |
2026-02-02 03:20:05.334620 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-02-02 03:20:05.334624 | orchestrator | Monday 02 February 2026 03:19:57 +0000 (0:00:00.352) 0:01:13.651 *******
2026-02-02 03:20:05.334638 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:20:05.334642 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:20:05.334646 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:20:05.334649 | orchestrator |
2026-02-02 03:20:05.334653 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-02-02 03:20:05.334657 | orchestrator | Monday 02 February 2026 03:19:57 +0000 (0:00:00.367) 0:01:14.018 *******
2026-02-02 03:20:05.334661 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:20:05.334665 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:20:05.334668 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:20:05.334672 | orchestrator |
2026-02-02 03:20:05.334676 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-02-02 03:20:05.334680 | orchestrator | Monday 02 February 2026 03:19:58 +0000 (0:00:00.339) 0:01:14.358 *******
2026-02-02 03:20:05.334683 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:20:05.334687 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:20:05.334691 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:20:05.334695 | orchestrator |
2026-02-02 03:20:05.334698 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-02-02 03:20:05.334702 | orchestrator | Monday 02 February 2026 03:19:58 +0000 (0:00:00.546) 0:01:14.905 *******
2026-02-02 03:20:05.334706 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:20:05.334710 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:20:05.334714 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:20:05.334718 | orchestrator |
2026-02-02 03:20:05.334722 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-02-02 03:20:05.334729 | orchestrator | Monday 02 February 2026 03:19:58 +0000 (0:00:00.336) 0:01:15.242 *******
2026-02-02 03:20:05.334733 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:20:05.334737 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:20:05.334741 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:20:05.334744 | orchestrator |
2026-02-02 03:20:05.334748 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-02-02 03:20:05.334752 | orchestrator | Monday 02 February 2026 03:19:59 +0000 (0:00:00.301) 0:01:15.543 *******
2026-02-02 03:20:05.334765 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:20:05.334769 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:20:05.334773 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:20:05.334777 | orchestrator |
2026-02-02 03:20:05.334780 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-02 03:20:05.334787 | orchestrator | Monday 02 February 2026 03:19:59 +0000 (0:00:00.303) 0:01:15.847 *******
2026-02-02 03:20:05.334791 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 03:20:05.334796 | orchestrator |
2026-02-02 03:20:05.334799 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-02-02 03:20:05.334803 | orchestrator | Monday 02 February 2026 03:20:00 +0000 (0:00:00.792) 0:01:16.640 *******
2026-02-02 03:20:05.334807 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:20:05.334811 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:20:05.334814 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:20:05.334818 | orchestrator |
2026-02-02 03:20:05.334822 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-02-02 03:20:05.334827 | orchestrator | Monday 02 February 2026 03:20:00 +0000 (0:00:00.487) 0:01:17.127 *******
2026-02-02 03:20:05.334831 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:20:05.334835 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:20:05.334840 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:20:05.334844 | orchestrator |
2026-02-02 03:20:05.334849 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-02-02 03:20:05.334853 | orchestrator | Monday 02 February 2026 03:20:01 +0000 (0:00:00.461) 0:01:17.589 *******
2026-02-02 03:20:05.334857 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:20:05.334862 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:20:05.334867 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:20:05.334871 | orchestrator |
2026-02-02 03:20:05.334876 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-02-02 03:20:05.334880 | orchestrator | Monday 02 February 2026 03:20:01 +0000 (0:00:00.349) 0:01:17.939 *******
2026-02-02 03:20:05.334885 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:20:05.334889 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:20:05.334894 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:20:05.334898 | orchestrator |
2026-02-02 03:20:05.334903 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-02-02 03:20:05.334908 | orchestrator | Monday 02 February 2026 03:20:02 +0000 (0:00:00.631) 0:01:18.571 *******
2026-02-02 03:20:05.334912 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:20:05.334917 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:20:05.334921 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:20:05.334926 | orchestrator |
2026-02-02 03:20:05.334930 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-02-02 03:20:05.334935 | orchestrator | Monday 02 February 2026 03:20:02 +0000 (0:00:00.339) 0:01:18.910 *******
2026-02-02 03:20:05.334940 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:20:05.334944 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:20:05.334949 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:20:05.334953 | orchestrator |
2026-02-02 03:20:05.334958 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-02-02 03:20:05.334962 | orchestrator | Monday 02 February 2026 03:20:02 +0000 (0:00:00.364) 0:01:19.275 *******
2026-02-02 03:20:05.334972 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:20:05.334977 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:20:05.334982 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:20:05.334986 | orchestrator |
2026-02-02 03:20:05.334991 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-02-02 03:20:05.334995 | orchestrator | Monday 02 February 2026 03:20:03 +0000 (0:00:00.334) 0:01:19.610 *******
2026-02-02 03:20:05.335000 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:20:05.335004 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:20:05.335008 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:20:05.335013 | orchestrator |
2026-02-02 03:20:05.335017 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-02-02 03:20:05.335022 | orchestrator | Monday 02 February 2026 03:20:03 +0000 (0:00:00.573) 0:01:20.184 *******
2026-02-02 03:20:05.335028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:05.335034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:05.335039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:05.335051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:11.466249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:11.466357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:11.466370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:11.466380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:11.466483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:11.466496 | orchestrator |
2026-02-02 03:20:11.466508 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-02 03:20:11.466518 | orchestrator | Monday 02 February 2026 03:20:05 +0000 (0:00:01.428) 0:01:21.612 *******
2026-02-02 03:20:11.466529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:11.466542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:11.466569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:11.466581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:11.466627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:11.466650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:11.466670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:11.466689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:11.466722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:11.466740 | orchestrator |
2026-02-02 03:20:11.466757 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-02-02 03:20:11.466777 | orchestrator | Monday 02 February 2026 03:20:09 +0000 (0:00:03.730) 0:01:25.342 *******
2026-02-02 03:20:11.466796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:11.466818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:11.466839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:11.466858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:11.466878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:11.466910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:35.781003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:35.781136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:35.781147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:35.781225 | orchestrator |
2026-02-02 03:20:35.781233 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-02 03:20:35.781242 | orchestrator | Monday 02 February 2026 03:20:11 +0000 (0:00:01.981) 0:01:27.323 *******
2026-02-02 03:20:35.781248 | orchestrator |
2026-02-02 03:20:35.781255 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-02 03:20:35.781262 | orchestrator | Monday 02 February 2026 03:20:11 +0000 (0:00:00.065) 0:01:27.389 *******
2026-02-02 03:20:35.781268 | orchestrator |
2026-02-02 03:20:35.781275 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-02 03:20:35.781282 | orchestrator | Monday 02 February 2026 03:20:11 +0000 (0:00:00.066) 0:01:27.455 *******
2026-02-02 03:20:35.781289 | orchestrator |
2026-02-02 03:20:35.781295 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-02-02 03:20:35.781302 | orchestrator | Monday 02 February 2026 03:20:11 +0000 (0:00:00.282) 0:01:27.738 *******
2026-02-02 03:20:35.781309 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:20:35.781316 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:20:35.781322 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:20:35.781329 | orchestrator |
2026-02-02 03:20:35.781336 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-02-02 03:20:35.781343 | orchestrator | Monday 02 February 2026 03:20:13 +0000 (0:00:02.443) 0:01:30.181 *******
2026-02-02 03:20:35.781349 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:20:35.781355 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:20:35.781361 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:20:35.781368 | orchestrator |
2026-02-02 03:20:35.781375 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-02-02 03:20:35.781382 | orchestrator | Monday 02 February 2026 03:20:21 +0000 (0:00:07.543) 0:01:37.725 *******
2026-02-02 03:20:35.781388 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:20:35.781394 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:20:35.781401 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:20:35.781407 | orchestrator |
2026-02-02 03:20:35.781415 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-02-02 03:20:35.781421 | orchestrator | Monday 02 February 2026 03:20:29 +0000 (0:00:07.592) 0:01:45.318 *******
2026-02-02 03:20:35.781428 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:20:35.781434 | orchestrator |
2026-02-02 03:20:35.781441 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-02-02 03:20:35.781448 | orchestrator | Monday 02 February 2026 03:20:29 +0000 (0:00:00.154) 0:01:45.472 *******
2026-02-02 03:20:35.781454 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:20:35.781462 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:20:35.781469 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:20:35.781475 | orchestrator |
2026-02-02 03:20:35.781482 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-02-02 03:20:35.781489 | orchestrator | Monday 02 February 2026 03:20:30 +0000 (0:00:01.015) 0:01:46.487 *******
2026-02-02 03:20:35.781496 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:20:35.781510 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:20:35.781517 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:20:35.781523 | orchestrator |
2026-02-02 03:20:35.781530 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-02-02 03:20:35.781537 | orchestrator | Monday 02 February 2026 03:20:30 +0000 (0:00:00.679) 0:01:47.167 *******
2026-02-02 03:20:35.781543 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:20:35.781549 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:20:35.781557 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:20:35.781567 | orchestrator |
2026-02-02 03:20:35.781573 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-02-02 03:20:35.781597 | orchestrator | Monday 02 February 2026 03:20:31 +0000 (0:00:00.797) 0:01:47.965 *******
2026-02-02 03:20:35.781605 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:20:35.781612 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:20:35.781620 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:20:35.781626 | orchestrator |
2026-02-02 03:20:35.781632 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-02-02 03:20:35.781640 | orchestrator | Monday 02 February 2026 03:20:32 +0000 (0:00:00.622) 0:01:48.587 *******
2026-02-02 03:20:35.781646 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:20:35.781652 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:20:35.781675 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:20:35.781681 | orchestrator |
2026-02-02 03:20:35.781687 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-02-02 03:20:35.781693 | orchestrator | Monday 02 February 2026 03:20:33 +0000 (0:00:00.731) 0:01:49.319 *******
2026-02-02 03:20:35.781699 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:20:35.781706 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:20:35.781712 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:20:35.781718 | orchestrator |
2026-02-02 03:20:35.781725 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-02-02 03:20:35.781731 | orchestrator | Monday 02 February 2026 03:20:34 +0000 (0:00:01.005) 0:01:50.324 *******
2026-02-02 03:20:35.781737 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:20:35.781743 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:20:35.781749 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:20:35.781755 | orchestrator |
2026-02-02 03:20:35.781760 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-02-02 03:20:35.781766 | orchestrator | Monday 02 February 2026 03:20:34 +0000 (0:00:00.312) 0:01:50.637 *******
2026-02-02 03:20:35.781774 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:35.781782 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:35.781789 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:35.781796 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:35.781810 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:35.781816 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:35.781822 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:35.781833 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:35.781848 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:42.830641 | orchestrator |
2026-02-02 03:20:42.830860 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-02 03:20:42.830932 | orchestrator | Monday 02 February 2026 03:20:35 +0000 (0:00:01.421) 0:01:52.058 *******
2026-02-02 03:20:42.830951 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:42.830966 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:42.830977 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:42.830989 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:42.831032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:42.831044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:42.831055 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:42.831066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:42.831093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:42.831105 | orchestrator |
2026-02-02 03:20:42.831116 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-02-02 03:20:42.831127 | orchestrator | Monday 02 February 2026 03:20:39 +0000 (0:00:03.766) 0:01:55.825 *******
2026-02-02 03:20:42.831192 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:42.831208 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:42.831222 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:42.831235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:42.831257 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:42.831270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:42.831283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 03:20:42.831297 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:20:42.831317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 03:20:42.831330 | orchestrator | 2026-02-02 03:20:42.831341 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-02 03:20:42.831352 | orchestrator | Monday 02 February 2026 03:20:42 +0000 (0:00:02.983) 0:01:58.809 ******* 2026-02-02 03:20:42.831363 | orchestrator | 2026-02-02 03:20:42.831374 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-02 03:20:42.831385 | orchestrator | Monday 02 February 2026 03:20:42 +0000 (0:00:00.075) 0:01:58.884 ******* 2026-02-02 03:20:42.831396 | orchestrator | 2026-02-02 03:20:42.831407 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-02 03:20:42.831418 | orchestrator | Monday 02 February 2026 03:20:42 +0000 (0:00:00.087) 0:01:58.972 ******* 2026-02-02 03:20:42.831429 | orchestrator | 2026-02-02 03:20:42.831448 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-02 03:21:07.096211 | orchestrator | Monday 02 February 2026 03:20:42 +0000 (0:00:00.116) 0:01:59.088 ******* 2026-02-02 03:21:07.096311 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:21:07.096319 | orchestrator | changed: 
[testbed-node-2] 2026-02-02 03:21:07.096325 | orchestrator | 2026-02-02 03:21:07.096333 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-02 03:21:07.096340 | orchestrator | Monday 02 February 2026 03:20:49 +0000 (0:00:06.206) 0:02:05.295 ******* 2026-02-02 03:21:07.096358 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:21:07.096390 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:21:07.096396 | orchestrator | 2026-02-02 03:21:07.096400 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-02 03:21:07.096418 | orchestrator | Monday 02 February 2026 03:20:55 +0000 (0:00:06.161) 0:02:11.456 ******* 2026-02-02 03:21:07.096423 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:21:07.096427 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:21:07.096430 | orchestrator | 2026-02-02 03:21:07.096435 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-02 03:21:07.096439 | orchestrator | Monday 02 February 2026 03:21:01 +0000 (0:00:06.204) 0:02:17.661 ******* 2026-02-02 03:21:07.096443 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:21:07.096448 | orchestrator | 2026-02-02 03:21:07.096452 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-02 03:21:07.096456 | orchestrator | Monday 02 February 2026 03:21:01 +0000 (0:00:00.131) 0:02:17.793 ******* 2026-02-02 03:21:07.096460 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:21:07.096465 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:21:07.096469 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:21:07.096473 | orchestrator | 2026-02-02 03:21:07.096477 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-02 03:21:07.096481 | orchestrator | Monday 02 February 2026 03:21:02 +0000 (0:00:01.041) 0:02:18.834 ******* 
2026-02-02 03:21:07.096485 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:21:07.096489 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:21:07.096493 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:21:07.096497 | orchestrator | 2026-02-02 03:21:07.096501 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-02 03:21:07.096505 | orchestrator | Monday 02 February 2026 03:21:03 +0000 (0:00:00.624) 0:02:19.459 ******* 2026-02-02 03:21:07.096509 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:21:07.096513 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:21:07.096517 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:21:07.096521 | orchestrator | 2026-02-02 03:21:07.096524 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-02 03:21:07.096528 | orchestrator | Monday 02 February 2026 03:21:03 +0000 (0:00:00.807) 0:02:20.267 ******* 2026-02-02 03:21:07.096532 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:21:07.096536 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:21:07.096540 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:21:07.096544 | orchestrator | 2026-02-02 03:21:07.096548 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-02 03:21:07.096552 | orchestrator | Monday 02 February 2026 03:21:04 +0000 (0:00:00.613) 0:02:20.880 ******* 2026-02-02 03:21:07.096556 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:21:07.096559 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:21:07.096563 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:21:07.096567 | orchestrator | 2026-02-02 03:21:07.096571 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-02 03:21:07.096575 | orchestrator | Monday 02 February 2026 03:21:05 +0000 (0:00:01.205) 0:02:22.086 ******* 2026-02-02 03:21:07.096579 | orchestrator 
| ok: [testbed-node-0] 2026-02-02 03:21:07.096583 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:21:07.096586 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:21:07.096590 | orchestrator | 2026-02-02 03:21:07.096594 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 03:21:07.096599 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-02 03:21:07.096603 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-02 03:21:07.096607 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-02 03:21:07.096612 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 03:21:07.096620 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 03:21:07.096624 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 03:21:07.096628 | orchestrator | 2026-02-02 03:21:07.096632 | orchestrator | 2026-02-02 03:21:07.096647 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 03:21:07.096651 | orchestrator | Monday 02 February 2026 03:21:06 +0000 (0:00:00.894) 0:02:22.981 ******* 2026-02-02 03:21:07.096655 | orchestrator | =============================================================================== 2026-02-02 03:21:07.096659 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 36.36s 2026-02-02 03:21:07.096663 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 17.12s 2026-02-02 03:21:07.096667 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.80s 2026-02-02 03:21:07.096671 | orchestrator | ovn-db 
: Restart ovn-sb-db container ----------------------------------- 13.71s 2026-02-02 03:21:07.096674 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.65s 2026-02-02 03:21:07.096689 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.77s 2026-02-02 03:21:07.096694 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.73s 2026-02-02 03:21:07.096697 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.98s 2026-02-02 03:21:07.096701 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.31s 2026-02-02 03:21:07.096705 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 1.98s 2026-02-02 03:21:07.096709 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.53s 2026-02-02 03:21:07.096713 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.52s 2026-02-02 03:21:07.096716 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.50s 2026-02-02 03:21:07.096720 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.44s 2026-02-02 03:21:07.096724 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.43s 2026-02-02 03:21:07.096728 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.42s 2026-02-02 03:21:07.096732 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.21s 2026-02-02 03:21:07.096735 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.19s 2026-02-02 03:21:07.096739 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.18s 2026-02-02 03:21:07.096743 | orchestrator | ovn-controller : 
include_tasks ------------------------------------------ 1.16s 2026-02-02 03:21:07.464418 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-02 03:21:07.464506 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh 2026-02-02 03:21:09.705522 | orchestrator | 2026-02-02 03:21:09 | INFO  | Trying to run play wipe-partitions in environment custom 2026-02-02 03:21:19.822724 | orchestrator | 2026-02-02 03:21:19 | INFO  | Task 4e145ad2-7789-41a4-9bbc-6740c7d2ce42 (wipe-partitions) was prepared for execution. 2026-02-02 03:21:19.822823 | orchestrator | 2026-02-02 03:21:19 | INFO  | It takes a moment until task 4e145ad2-7789-41a4-9bbc-6740c7d2ce42 (wipe-partitions) has been started and output is visible here. 2026-02-02 03:21:32.724629 | orchestrator | 2026-02-02 03:21:32.724759 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-02-02 03:21:32.724778 | orchestrator | 2026-02-02 03:21:32.724790 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-02-02 03:21:32.724802 | orchestrator | Monday 02 February 2026 03:21:24 +0000 (0:00:00.134) 0:00:00.134 ******* 2026-02-02 03:21:32.724840 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:21:32.724853 | orchestrator | changed: [testbed-node-3] 2026-02-02 03:21:32.724865 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:21:32.724882 | orchestrator | 2026-02-02 03:21:32.724901 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-02-02 03:21:32.724921 | orchestrator | Monday 02 February 2026 03:21:24 +0000 (0:00:00.596) 0:00:00.731 ******* 2026-02-02 03:21:32.724941 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:21:32.724962 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:21:32.724982 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:21:32.725002 | orchestrator | 2026-02-02 03:21:32.725024 | 
orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-02-02 03:21:32.725042 | orchestrator | Monday 02 February 2026 03:21:25 +0000 (0:00:00.428) 0:00:01.159 ******* 2026-02-02 03:21:32.725054 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:21:32.725066 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:21:32.725076 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:21:32.725087 | orchestrator | 2026-02-02 03:21:32.725098 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-02-02 03:21:32.725137 | orchestrator | Monday 02 February 2026 03:21:25 +0000 (0:00:00.602) 0:00:01.762 ******* 2026-02-02 03:21:32.725151 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:21:32.725164 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:21:32.725178 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:21:32.725191 | orchestrator | 2026-02-02 03:21:32.725203 | orchestrator | TASK [Check device availability] *********************************************** 2026-02-02 03:21:32.725217 | orchestrator | Monday 02 February 2026 03:21:26 +0000 (0:00:00.279) 0:00:02.041 ******* 2026-02-02 03:21:32.725229 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-02 03:21:32.725242 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-02 03:21:32.725255 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-02 03:21:32.725268 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-02 03:21:32.725281 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-02 03:21:32.725300 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-02 03:21:32.725338 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-02 03:21:32.725357 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-02 03:21:32.725376 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 
2026-02-02 03:21:32.725390 | orchestrator | 2026-02-02 03:21:32.725401 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-02-02 03:21:32.725413 | orchestrator | Monday 02 February 2026 03:21:27 +0000 (0:00:01.256) 0:00:03.297 ******* 2026-02-02 03:21:32.725424 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-02-02 03:21:32.725436 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-02-02 03:21:32.725455 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-02-02 03:21:32.725504 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-02-02 03:21:32.725540 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-02-02 03:21:32.725559 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2026-02-02 03:21:32.725577 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-02-02 03:21:32.725596 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-02-02 03:21:32.725615 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-02-02 03:21:32.725634 | orchestrator | 2026-02-02 03:21:32.725652 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-02-02 03:21:32.725670 | orchestrator | Monday 02 February 2026 03:21:29 +0000 (0:00:01.593) 0:00:04.891 ******* 2026-02-02 03:21:32.725682 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-02 03:21:32.725693 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-02 03:21:32.725704 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-02 03:21:32.725714 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-02 03:21:32.725739 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-02 03:21:32.725750 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-02 03:21:32.725761 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-02 03:21:32.725772 | orchestrator | 
changed: [testbed-node-5] => (item=/dev/sdd) 2026-02-02 03:21:32.725783 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-02 03:21:32.725793 | orchestrator | 2026-02-02 03:21:32.725804 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-02-02 03:21:32.725815 | orchestrator | Monday 02 February 2026 03:21:31 +0000 (0:00:02.042) 0:00:06.934 ******* 2026-02-02 03:21:32.725826 | orchestrator | changed: [testbed-node-3] 2026-02-02 03:21:32.725837 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:21:32.725848 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:21:32.725858 | orchestrator | 2026-02-02 03:21:32.725869 | orchestrator | TASK [Request device events from the kernel] *********************************** 2026-02-02 03:21:32.725880 | orchestrator | Monday 02 February 2026 03:21:31 +0000 (0:00:00.625) 0:00:07.559 ******* 2026-02-02 03:21:32.725890 | orchestrator | changed: [testbed-node-3] 2026-02-02 03:21:32.725901 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:21:32.725912 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:21:32.725923 | orchestrator | 2026-02-02 03:21:32.725941 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 03:21:32.725960 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 03:21:32.725980 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 03:21:32.726090 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 03:21:32.726139 | orchestrator | 2026-02-02 03:21:32.726153 | orchestrator | 2026-02-02 03:21:32.726164 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 03:21:32.726175 | orchestrator | Monday 02 February 2026 03:21:32 +0000 
(0:00:00.645) 0:00:08.205 ******* 2026-02-02 03:21:32.726186 | orchestrator | =============================================================================== 2026-02-02 03:21:32.726196 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.04s 2026-02-02 03:21:32.726207 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.59s 2026-02-02 03:21:32.726218 | orchestrator | Check device availability ----------------------------------------------- 1.26s 2026-02-02 03:21:32.726229 | orchestrator | Request device events from the kernel ----------------------------------- 0.65s 2026-02-02 03:21:32.726239 | orchestrator | Reload udev rules ------------------------------------------------------- 0.63s 2026-02-02 03:21:32.726250 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.60s 2026-02-02 03:21:32.726261 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.60s 2026-02-02 03:21:32.726271 | orchestrator | Remove all rook related logical devices --------------------------------- 0.43s 2026-02-02 03:21:32.726282 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.28s 2026-02-02 03:21:45.426634 | orchestrator | 2026-02-02 03:21:45 | INFO  | Task a9fd78cb-84d3-4a3d-aa02-42ff4503dfec (facts) was prepared for execution. 2026-02-02 03:21:45.426769 | orchestrator | 2026-02-02 03:21:45 | INFO  | It takes a moment until task a9fd78cb-84d3-4a3d-aa02-42ff4503dfec (facts) has been started and output is visible here. 
2026-02-02 03:21:58.682518 | orchestrator | 2026-02-02 03:21:58.682615 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-02 03:21:58.682629 | orchestrator | 2026-02-02 03:21:58.682639 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-02 03:21:58.682648 | orchestrator | Monday 02 February 2026 03:21:49 +0000 (0:00:00.280) 0:00:00.280 ******* 2026-02-02 03:21:58.682678 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:21:58.682688 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:21:58.682696 | orchestrator | ok: [testbed-manager] 2026-02-02 03:21:58.682704 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:21:58.682712 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:21:58.682720 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:21:58.682728 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:21:58.682736 | orchestrator | 2026-02-02 03:21:58.682744 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-02 03:21:58.682752 | orchestrator | Monday 02 February 2026 03:21:51 +0000 (0:00:01.168) 0:00:01.449 ******* 2026-02-02 03:21:58.682761 | orchestrator | skipping: [testbed-manager] 2026-02-02 03:21:58.682770 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:21:58.682807 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:21:58.682816 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:21:58.682824 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:21:58.682831 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:21:58.682839 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:21:58.682847 | orchestrator | 2026-02-02 03:21:58.682855 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-02 03:21:58.682863 | orchestrator | 2026-02-02 03:21:58.682871 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-02 03:21:58.682879 | orchestrator | Monday 02 February 2026 03:21:52 +0000 (0:00:01.324) 0:00:02.773 ******* 2026-02-02 03:21:58.682887 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:21:58.682895 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:21:58.682903 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:21:58.682916 | orchestrator | ok: [testbed-manager] 2026-02-02 03:21:58.682930 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:21:58.682942 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:21:58.682955 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:21:58.682968 | orchestrator | 2026-02-02 03:21:58.682980 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-02 03:21:58.682992 | orchestrator | 2026-02-02 03:21:58.683004 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-02 03:21:58.683019 | orchestrator | Monday 02 February 2026 03:21:57 +0000 (0:00:05.099) 0:00:07.872 ******* 2026-02-02 03:21:58.683031 | orchestrator | skipping: [testbed-manager] 2026-02-02 03:21:58.683046 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:21:58.683060 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:21:58.683072 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:21:58.683107 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:21:58.683122 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:21:58.683135 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:21:58.683149 | orchestrator | 2026-02-02 03:21:58.683162 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 03:21:58.683177 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 03:21:58.683238 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-02 03:21:58.683256 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 03:21:58.683269 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 03:21:58.683282 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 03:21:58.683296 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 03:21:58.683323 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 03:21:58.683337 | orchestrator | 2026-02-02 03:21:58.683350 | orchestrator | 2026-02-02 03:21:58.683363 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 03:21:58.683377 | orchestrator | Monday 02 February 2026 03:21:58 +0000 (0:00:00.592) 0:00:08.465 ******* 2026-02-02 03:21:58.683390 | orchestrator | =============================================================================== 2026-02-02 03:21:58.683403 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.10s 2026-02-02 03:21:58.683416 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.32s 2026-02-02 03:21:58.683429 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.17s 2026-02-02 03:21:58.683441 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s 2026-02-02 03:22:01.319405 | orchestrator | 2026-02-02 03:22:01 | INFO  | Task 03b9c451-6a8e-4234-bdaf-c020872d6415 (ceph-configure-lvm-volumes) was prepared for execution. 
2026-02-02 03:22:01.319655 | orchestrator | 2026-02-02 03:22:01 | INFO  | It takes a moment until task 03b9c451-6a8e-4234-bdaf-c020872d6415 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-02-02 03:22:14.282258 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-02 03:22:14.282374 | orchestrator | 2.16.14
2026-02-02 03:22:14.282392 | orchestrator |
2026-02-02 03:22:14.282405 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-02-02 03:22:14.282417 | orchestrator |
2026-02-02 03:22:14.282427 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-02 03:22:14.282438 | orchestrator | Monday 02 February 2026 03:22:06 +0000 (0:00:00.402) 0:00:00.402 *******
2026-02-02 03:22:14.282450 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-02 03:22:14.282461 | orchestrator |
2026-02-02 03:22:14.282487 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-02 03:22:14.282499 | orchestrator | Monday 02 February 2026 03:22:06 +0000 (0:00:00.262) 0:00:00.665 *******
2026-02-02 03:22:14.282509 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:22:14.282520 | orchestrator |
2026-02-02 03:22:14.282530 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:22:14.282541 | orchestrator | Monday 02 February 2026 03:22:06 +0000 (0:00:00.241) 0:00:00.906 *******
2026-02-02 03:22:14.282552 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-02-02 03:22:14.282563 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-02-02 03:22:14.282575 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-02-02 03:22:14.282585 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-02-02 03:22:14.282595 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-02-02 03:22:14.282606 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-02-02 03:22:14.282616 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-02-02 03:22:14.282627 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-02-02 03:22:14.282636 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-02-02 03:22:14.282647 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-02-02 03:22:14.282657 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-02-02 03:22:14.282668 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-02-02 03:22:14.282701 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-02-02 03:22:14.282713 | orchestrator |
2026-02-02 03:22:14.282724 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:22:14.282734 | orchestrator | Monday 02 February 2026 03:22:07 +0000 (0:00:00.511) 0:00:01.418 *******
2026-02-02 03:22:14.282744 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:14.282755 | orchestrator |
2026-02-02 03:22:14.282765 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:22:14.282776 | orchestrator | Monday 02 February 2026 03:22:07 +0000 (0:00:00.218) 0:00:01.636 *******
2026-02-02 03:22:14.282786 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:14.282796 | orchestrator |
2026-02-02 03:22:14.282808 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:22:14.282815 | orchestrator | Monday 02 February 2026 03:22:07 +0000 (0:00:00.237) 0:00:01.873 *******
2026-02-02 03:22:14.282823 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:14.282830 | orchestrator |
2026-02-02 03:22:14.282837 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:22:14.282845 | orchestrator | Monday 02 February 2026 03:22:08 +0000 (0:00:00.225) 0:00:02.099 *******
2026-02-02 03:22:14.282852 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:14.282859 | orchestrator |
2026-02-02 03:22:14.282866 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:22:14.282873 | orchestrator | Monday 02 February 2026 03:22:08 +0000 (0:00:00.197) 0:00:02.297 *******
2026-02-02 03:22:14.282881 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:14.282888 | orchestrator |
2026-02-02 03:22:14.282895 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:22:14.282902 | orchestrator | Monday 02 February 2026 03:22:08 +0000 (0:00:00.232) 0:00:02.529 *******
2026-02-02 03:22:14.282909 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:14.282917 | orchestrator |
2026-02-02 03:22:14.282924 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:22:14.282931 | orchestrator | Monday 02 February 2026 03:22:08 +0000 (0:00:00.212) 0:00:02.742 *******
2026-02-02 03:22:14.282939 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:14.282946 | orchestrator |
2026-02-02 03:22:14.282953 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:22:14.282960 | orchestrator | Monday 02 February 2026 03:22:08 +0000 (0:00:00.226) 0:00:02.968 *******
2026-02-02 03:22:14.282968 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:14.282975 | orchestrator |
2026-02-02 03:22:14.282982 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:22:14.282989 | orchestrator | Monday 02 February 2026 03:22:09 +0000 (0:00:00.225) 0:00:03.194 *******
2026-02-02 03:22:14.282997 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58)
2026-02-02 03:22:14.283005 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58)
2026-02-02 03:22:14.283012 | orchestrator |
2026-02-02 03:22:14.283020 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:22:14.283043 | orchestrator | Monday 02 February 2026 03:22:09 +0000 (0:00:00.466) 0:00:03.660 *******
2026-02-02 03:22:14.283050 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5578c4aa-4507-4a80-9665-78072b9f11f4)
2026-02-02 03:22:14.283058 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5578c4aa-4507-4a80-9665-78072b9f11f4)
2026-02-02 03:22:14.283065 | orchestrator |
2026-02-02 03:22:14.283096 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:22:14.283109 | orchestrator | Monday 02 February 2026 03:22:10 +0000 (0:00:00.682) 0:00:04.343 *******
2026-02-02 03:22:14.283126 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1f26c814-af40-4046-ac8d-013998d956cc)
2026-02-02 03:22:14.283145 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1f26c814-af40-4046-ac8d-013998d956cc)
2026-02-02 03:22:14.283156 | orchestrator |
2026-02-02 03:22:14.283168 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:22:14.283177 | orchestrator | Monday 02 February 2026 03:22:10 +0000 (0:00:00.685) 0:00:05.028 *******
2026-02-02 03:22:14.283184 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c15f901f-7629-41e5-bfd5-e721d3f198c6)
2026-02-02 03:22:14.283192 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c15f901f-7629-41e5-bfd5-e721d3f198c6)
2026-02-02 03:22:14.283199 | orchestrator |
2026-02-02 03:22:14.283207 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:22:14.283214 | orchestrator | Monday 02 February 2026 03:22:11 +0000 (0:00:00.954) 0:00:05.982 *******
2026-02-02 03:22:14.283221 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-02 03:22:14.283228 | orchestrator |
2026-02-02 03:22:14.283234 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:22:14.283240 | orchestrator | Monday 02 February 2026 03:22:12 +0000 (0:00:00.406) 0:00:06.389 *******
2026-02-02 03:22:14.283247 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-02-02 03:22:14.283253 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-02-02 03:22:14.283259 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-02-02 03:22:14.283265 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-02-02 03:22:14.283272 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-02-02 03:22:14.283278 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-02-02 03:22:14.283284 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-02-02 03:22:14.283290 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-02-02 03:22:14.283296 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-02-02 03:22:14.283302 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-02-02 03:22:14.283309 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-02-02 03:22:14.283315 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-02-02 03:22:14.283321 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-02-02 03:22:14.283327 | orchestrator |
2026-02-02 03:22:14.283334 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:22:14.283340 | orchestrator | Monday 02 February 2026 03:22:12 +0000 (0:00:00.412) 0:00:06.801 *******
2026-02-02 03:22:14.283346 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:14.283352 | orchestrator |
2026-02-02 03:22:14.283359 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:22:14.283365 | orchestrator | Monday 02 February 2026 03:22:12 +0000 (0:00:00.211) 0:00:07.013 *******
2026-02-02 03:22:14.283371 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:14.283397 | orchestrator |
2026-02-02 03:22:14.283404 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:22:14.283410 | orchestrator | Monday 02 February 2026 03:22:13 +0000 (0:00:00.211) 0:00:07.225 *******
2026-02-02 03:22:14.283416 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:14.283423 | orchestrator |
2026-02-02 03:22:14.283429 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:22:14.283435 | orchestrator | Monday 02 February 2026 03:22:13 +0000 (0:00:00.228) 0:00:07.453 *******
2026-02-02 03:22:14.283446 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:14.283452 | orchestrator |
2026-02-02 03:22:14.283459 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:22:14.283465 | orchestrator | Monday 02 February 2026 03:22:13 +0000 (0:00:00.246) 0:00:07.700 *******
2026-02-02 03:22:14.283471 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:14.283478 | orchestrator |
2026-02-02 03:22:14.283484 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:22:14.283490 | orchestrator | Monday 02 February 2026 03:22:13 +0000 (0:00:00.212) 0:00:07.912 *******
2026-02-02 03:22:14.283497 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:14.283503 | orchestrator |
2026-02-02 03:22:14.283509 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:22:14.283515 | orchestrator | Monday 02 February 2026 03:22:14 +0000 (0:00:00.233) 0:00:08.145 *******
2026-02-02 03:22:14.283522 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:14.283528 | orchestrator |
2026-02-02 03:22:14.283540 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:22:22.817703 | orchestrator | Monday 02 February 2026 03:22:14 +0000 (0:00:00.212) 0:00:08.358 *******
2026-02-02 03:22:22.817793 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:22.817803 | orchestrator |
2026-02-02 03:22:22.817822 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:22:22.817829 | orchestrator | Monday 02 February 2026 03:22:14 +0000 (0:00:00.201) 0:00:08.559 *******
2026-02-02 03:22:22.817836 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-02-02 03:22:22.817844 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-02-02 03:22:22.817850 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-02-02 03:22:22.817869 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-02-02 03:22:22.817875 | orchestrator |
2026-02-02 03:22:22.817882 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:22:22.817888 | orchestrator | Monday 02 February 2026 03:22:15 +0000 (0:00:01.186) 0:00:09.746 *******
2026-02-02 03:22:22.817895 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:22.817902 | orchestrator |
2026-02-02 03:22:22.817908 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:22:22.817915 | orchestrator | Monday 02 February 2026 03:22:15 +0000 (0:00:00.221) 0:00:09.967 *******
2026-02-02 03:22:22.817921 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:22.817927 | orchestrator |
2026-02-02 03:22:22.817934 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:22:22.817940 | orchestrator | Monday 02 February 2026 03:22:16 +0000 (0:00:00.262) 0:00:10.230 *******
2026-02-02 03:22:22.817946 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:22.817953 | orchestrator |
2026-02-02 03:22:22.817959 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:22:22.817965 | orchestrator | Monday 02 February 2026 03:22:16 +0000 (0:00:00.229) 0:00:10.459 *******
2026-02-02 03:22:22.817972 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:22.817978 | orchestrator |
2026-02-02 03:22:22.817984 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-02-02 03:22:22.817991 | orchestrator | Monday 02 February 2026 03:22:16 +0000 (0:00:00.268) 0:00:10.728 *******
2026-02-02 03:22:22.817997 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-02-02 03:22:22.818004 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-02-02 03:22:22.818010 | orchestrator |
2026-02-02 03:22:22.818055 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-02-02 03:22:22.818062 | orchestrator | Monday 02 February 2026 03:22:16 +0000 (0:00:00.178) 0:00:10.906 *******
2026-02-02 03:22:22.818107 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:22.818114 | orchestrator |
2026-02-02 03:22:22.818121 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-02-02 03:22:22.818127 | orchestrator | Monday 02 February 2026 03:22:16 +0000 (0:00:00.146) 0:00:11.053 *******
2026-02-02 03:22:22.818148 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:22.818155 | orchestrator |
2026-02-02 03:22:22.818162 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-02-02 03:22:22.818168 | orchestrator | Monday 02 February 2026 03:22:17 +0000 (0:00:00.139) 0:00:11.193 *******
2026-02-02 03:22:22.818197 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:22.818204 | orchestrator |
2026-02-02 03:22:22.818211 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-02-02 03:22:22.818217 | orchestrator | Monday 02 February 2026 03:22:17 +0000 (0:00:00.139) 0:00:11.332 *******
2026-02-02 03:22:22.818224 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:22:22.818230 | orchestrator |
2026-02-02 03:22:22.818236 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-02-02 03:22:22.818242 | orchestrator | Monday 02 February 2026 03:22:17 +0000 (0:00:00.163) 0:00:11.496 *******
2026-02-02 03:22:22.818250 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2b8f5a57-fc4d-5c4a-8869-764dca19b379'}})
2026-02-02 03:22:22.818256 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'af42a967-eb71-546a-abb0-a5185990ed2a'}})
2026-02-02 03:22:22.818264 | orchestrator |
2026-02-02 03:22:22.818271 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-02-02 03:22:22.818278 | orchestrator | Monday 02 February 2026 03:22:17 +0000 (0:00:00.231) 0:00:11.728 *******
2026-02-02 03:22:22.818289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2b8f5a57-fc4d-5c4a-8869-764dca19b379'}})
2026-02-02 03:22:22.818301 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'af42a967-eb71-546a-abb0-a5185990ed2a'}})
2026-02-02 03:22:22.818312 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:22.818327 | orchestrator |
2026-02-02 03:22:22.818341 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-02-02 03:22:22.818351 | orchestrator | Monday 02 February 2026 03:22:18 +0000 (0:00:00.383) 0:00:12.112 *******
2026-02-02 03:22:22.818362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2b8f5a57-fc4d-5c4a-8869-764dca19b379'}})
2026-02-02 03:22:22.818372 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'af42a967-eb71-546a-abb0-a5185990ed2a'}})
2026-02-02 03:22:22.818382 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:22.818392 | orchestrator |
2026-02-02 03:22:22.818402 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-02-02 03:22:22.818412 | orchestrator | Monday 02 February 2026 03:22:18 +0000 (0:00:00.168) 0:00:12.281 *******
2026-02-02 03:22:22.818422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2b8f5a57-fc4d-5c4a-8869-764dca19b379'}})
2026-02-02 03:22:22.818448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'af42a967-eb71-546a-abb0-a5185990ed2a'}})
2026-02-02 03:22:22.818459 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:22.818469 | orchestrator |
2026-02-02 03:22:22.818480 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-02-02 03:22:22.818491 | orchestrator | Monday 02 February 2026 03:22:18 +0000 (0:00:00.164) 0:00:12.446 *******
2026-02-02 03:22:22.818502 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:22:22.818511 | orchestrator |
2026-02-02 03:22:22.818522 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-02-02 03:22:22.818540 | orchestrator | Monday 02 February 2026 03:22:18 +0000 (0:00:00.160) 0:00:12.606 *******
2026-02-02 03:22:22.818552 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:22:22.818563 | orchestrator |
2026-02-02 03:22:22.818574 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-02-02 03:22:22.818584 | orchestrator | Monday 02 February 2026 03:22:18 +0000 (0:00:00.153) 0:00:12.759 *******
2026-02-02 03:22:22.818605 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:22.818616 | orchestrator |
2026-02-02 03:22:22.818626 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-02-02 03:22:22.818637 | orchestrator | Monday 02 February 2026 03:22:18 +0000 (0:00:00.152) 0:00:12.912 *******
2026-02-02 03:22:22.818648 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:22.818658 | orchestrator |
2026-02-02 03:22:22.818669 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-02-02 03:22:22.818680 | orchestrator | Monday 02 February 2026 03:22:18 +0000 (0:00:00.144) 0:00:13.057 *******
2026-02-02 03:22:22.818690 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:22.818701 | orchestrator |
2026-02-02 03:22:22.818711 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-02-02 03:22:22.818721 | orchestrator | Monday 02 February 2026 03:22:19 +0000 (0:00:00.168) 0:00:13.225 *******
2026-02-02 03:22:22.818732 | orchestrator | ok: [testbed-node-3] => {
2026-02-02 03:22:22.818743 | orchestrator |  "ceph_osd_devices": {
2026-02-02 03:22:22.818753 | orchestrator |  "sdb": {
2026-02-02 03:22:22.818765 | orchestrator |  "osd_lvm_uuid": "2b8f5a57-fc4d-5c4a-8869-764dca19b379"
2026-02-02 03:22:22.818775 | orchestrator |  },
2026-02-02 03:22:22.818786 | orchestrator |  "sdc": {
2026-02-02 03:22:22.818797 | orchestrator |  "osd_lvm_uuid": "af42a967-eb71-546a-abb0-a5185990ed2a"
2026-02-02 03:22:22.818807 | orchestrator |  }
2026-02-02 03:22:22.818817 | orchestrator |  }
2026-02-02 03:22:22.818828 | orchestrator | }
2026-02-02 03:22:22.818838 | orchestrator |
2026-02-02 03:22:22.818849 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-02 03:22:22.818859 | orchestrator | Monday 02 February 2026 03:22:19 +0000 (0:00:00.169) 0:00:13.395 *******
2026-02-02 03:22:22.818870 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:22.818880 | orchestrator |
2026-02-02 03:22:22.818891 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-02 03:22:22.818901 | orchestrator | Monday 02 February 2026 03:22:19 +0000 (0:00:00.183) 0:00:13.579 *******
2026-02-02 03:22:22.818912 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:22.818922 | orchestrator |
2026-02-02 03:22:22.818933 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-02 03:22:22.818943 | orchestrator | Monday 02 February 2026 03:22:19 +0000 (0:00:00.157) 0:00:13.736 *******
2026-02-02 03:22:22.818954 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:22:22.818964 | orchestrator |
2026-02-02 03:22:22.818974 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-02 03:22:22.818984 | orchestrator | Monday 02 February 2026 03:22:19 +0000 (0:00:00.148) 0:00:13.884 *******
2026-02-02 03:22:22.818994 | orchestrator | changed: [testbed-node-3] => {
2026-02-02 03:22:22.819005 | orchestrator |  "_ceph_configure_lvm_config_data": {
2026-02-02 03:22:22.819015 | orchestrator |  "ceph_osd_devices": {
2026-02-02 03:22:22.819026 | orchestrator |  "sdb": {
2026-02-02 03:22:22.819036 | orchestrator |  "osd_lvm_uuid": "2b8f5a57-fc4d-5c4a-8869-764dca19b379"
2026-02-02 03:22:22.819046 | orchestrator |  },
2026-02-02 03:22:22.819057 | orchestrator |  "sdc": {
2026-02-02 03:22:22.819137 | orchestrator |  "osd_lvm_uuid": "af42a967-eb71-546a-abb0-a5185990ed2a"
2026-02-02 03:22:22.819150 | orchestrator |  }
2026-02-02 03:22:22.819160 | orchestrator |  },
2026-02-02 03:22:22.819171 | orchestrator |  "lvm_volumes": [
2026-02-02 03:22:22.819181 | orchestrator |  {
2026-02-02 03:22:22.819193 | orchestrator |  "data": "osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379",
2026-02-02 03:22:22.819204 | orchestrator |  "data_vg": "ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379"
2026-02-02 03:22:22.819214 | orchestrator |  },
2026-02-02 03:22:22.819225 | orchestrator |  {
2026-02-02 03:22:22.819235 | orchestrator |  "data": "osd-block-af42a967-eb71-546a-abb0-a5185990ed2a",
2026-02-02 03:22:22.819253 | orchestrator |  "data_vg": "ceph-af42a967-eb71-546a-abb0-a5185990ed2a"
2026-02-02 03:22:22.819263 | orchestrator |  }
2026-02-02 03:22:22.819273 | orchestrator |  ]
2026-02-02 03:22:22.819283 | orchestrator |  }
2026-02-02 03:22:22.819294 | orchestrator | }
2026-02-02 03:22:22.819304 | orchestrator |
2026-02-02 03:22:22.819314 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-02-02 03:22:22.819324 | orchestrator | Monday 02 February 2026 03:22:20 +0000 (0:00:00.466) 0:00:14.351 *******
2026-02-02 03:22:22.819335 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-02 03:22:22.819341 | orchestrator |
2026-02-02 03:22:22.819347 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-02-02 03:22:22.819353 | orchestrator |
2026-02-02 03:22:22.819359 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-02 03:22:22.819366 | orchestrator | Monday 02 February 2026 03:22:22 +0000 (0:00:02.004) 0:00:16.355 *******
2026-02-02 03:22:22.819372 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-02-02 03:22:22.819378 | orchestrator |
2026-02-02 03:22:22.819384 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-02 03:22:22.819390 | orchestrator | Monday 02 February 2026 03:22:22 +0000 (0:00:00.298) 0:00:16.653 *******
2026-02-02 03:22:22.819396 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:22:22.819403 | orchestrator |
2026-02-02 03:22:22.819416 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:22:32.839340 | orchestrator | Monday 02 February 2026 03:22:22 +0000 (0:00:00.247) 0:00:16.901 *******
2026-02-02 03:22:32.839432 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-02-02 03:22:32.839444 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-02-02 03:22:32.839452 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-02-02 03:22:32.839474 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-02-02 03:22:32.839481 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-02-02 03:22:32.839489 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-02-02 03:22:32.839497 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-02-02 03:22:32.839504 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-02-02 03:22:32.839512 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-02-02 03:22:32.839519 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-02-02 03:22:32.839526 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-02-02 03:22:32.839533 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-02-02 03:22:32.839540 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-02-02 03:22:32.839548 | orchestrator |
2026-02-02 03:22:32.839556 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:22:32.839563 | orchestrator | Monday 02 February 2026 03:22:23 +0000 (0:00:00.402) 0:00:17.304 *******
2026-02-02 03:22:32.839571 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:22:32.839579 | orchestrator |
2026-02-02 03:22:32.839586 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:22:32.839594 | orchestrator | Monday 02 February 2026 03:22:23 +0000 (0:00:00.218) 0:00:17.522 *******
2026-02-02 03:22:32.839601 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:22:32.839608 | orchestrator |
2026-02-02 03:22:32.839616 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:22:32.839623 | orchestrator | Monday 02 February 2026 03:22:23 +0000 (0:00:00.225) 0:00:17.748 *******
2026-02-02 03:22:32.839650 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:22:32.839658 | orchestrator |
2026-02-02 03:22:32.839665 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:22:32.839672 | orchestrator | Monday 02 February 2026 03:22:23 +0000 (0:00:00.220) 0:00:17.968 *******
2026-02-02 03:22:32.839679 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:22:32.839687 | orchestrator |
2026-02-02 03:22:32.839694 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:22:32.839702 | orchestrator | Monday 02 February 2026 03:22:24 +0000 (0:00:00.683) 0:00:18.651 *******
2026-02-02 03:22:32.839709 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:22:32.839716 | orchestrator |
2026-02-02 03:22:32.839723 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:22:32.839730 | orchestrator | Monday 02 February 2026 03:22:24 +0000 (0:00:00.223) 0:00:18.875 *******
2026-02-02 03:22:32.839738 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:22:32.839745 | orchestrator |
2026-02-02 03:22:32.839752 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:22:32.839759 | orchestrator | Monday 02 February 2026 03:22:25 +0000 (0:00:00.234) 0:00:19.110 *******
2026-02-02 03:22:32.839767 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:22:32.839774 | orchestrator |
2026-02-02 03:22:32.839781 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:22:32.839789 | orchestrator | Monday 02 February 2026 03:22:25 +0000 (0:00:00.214) 0:00:19.324 *******
2026-02-02 03:22:32.839796 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:22:32.839803 | orchestrator |
2026-02-02 03:22:32.839810 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:22:32.839818 | orchestrator | Monday 02 February 2026 03:22:25 +0000 (0:00:00.269) 0:00:19.593 *******
2026-02-02 03:22:32.839825 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111)
2026-02-02 03:22:32.839833 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111)
2026-02-02 03:22:32.839841 | orchestrator |
2026-02-02 03:22:32.839848 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:22:32.839856 | orchestrator | Monday 02 February 2026 03:22:25 +0000 (0:00:00.452) 0:00:20.046 *******
2026-02-02 03:22:32.839863 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9dac4244-a4bc-44f9-ad81-53a595dd15e5)
2026-02-02 03:22:32.839870 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9dac4244-a4bc-44f9-ad81-53a595dd15e5)
2026-02-02 03:22:32.839877 | orchestrator |
2026-02-02 03:22:32.839885 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:22:32.839892 | orchestrator | Monday 02 February 2026 03:22:26 +0000 (0:00:00.534) 0:00:20.580 *******
2026-02-02 03:22:32.839899 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2d3e981f-8554-4288-941a-275f46913f28)
2026-02-02 03:22:32.839906 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2d3e981f-8554-4288-941a-275f46913f28)
2026-02-02 03:22:32.839914 | orchestrator |
2026-02-02 03:22:32.839921 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:22:32.839940 | orchestrator | Monday 02 February 2026 03:22:26 +0000 (0:00:00.470) 0:00:21.050 *******
2026-02-02 03:22:32.839948 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_076229ff-17a9-47be-973d-14b64a36a012)
2026-02-02 03:22:32.839955 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_076229ff-17a9-47be-973d-14b64a36a012)
2026-02-02 03:22:32.839962 | orchestrator |
2026-02-02 03:22:32.839970 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:22:32.839981 | orchestrator | Monday 02 February 2026 03:22:27 +0000 (0:00:00.717) 0:00:21.768 *******
2026-02-02 03:22:32.839988 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-02 03:22:32.840001 | orchestrator |
2026-02-02 03:22:32.840009 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:22:32.840016 | orchestrator | Monday 02 February 2026 03:22:28 +0000 (0:00:00.703) 0:00:22.472 *******
2026-02-02 03:22:32.840023 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-02-02 03:22:32.840031 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-02-02 03:22:32.840038 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-02-02 03:22:32.840045 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-02-02 03:22:32.840052 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-02-02 03:22:32.840084 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-02-02 03:22:32.840097 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-02-02 03:22:32.840109 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-02-02 03:22:32.840142 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-02-02 03:22:32.840150 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-02-02 03:22:32.840158 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-02-02 03:22:32.840165 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-02-02 03:22:32.840172 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-02-02 03:22:32.840180 | orchestrator |
2026-02-02 03:22:32.840187 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:22:32.840194 | orchestrator | Monday 02 February 2026 03:22:29 +0000 (0:00:00.934) 0:00:23.406 *******
2026-02-02 03:22:32.840201 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:22:32.840209 | orchestrator |
2026-02-02 03:22:32.840216 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:22:32.840223 | orchestrator | Monday 02 February 2026 03:22:29 +0000 (0:00:00.225) 0:00:23.632 *******
2026-02-02 03:22:32.840230 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:22:32.840237 | orchestrator |
2026-02-02 03:22:32.840245 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:22:32.840252 | orchestrator | Monday 02 February 2026 03:22:29 +0000 (0:00:00.227) 0:00:23.860 *******
2026-02-02 03:22:32.840259 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:22:32.840266 | orchestrator |
2026-02-02 03:22:32.840273 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:22:32.840281 | orchestrator | Monday 02 February 2026 03:22:30 +0000 (0:00:00.241) 0:00:24.101 *******
2026-02-02 03:22:32.840288 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:22:32.840295 | orchestrator |
2026-02-02 03:22:32.840302 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:22:32.840310 | orchestrator | Monday 02 February 2026 03:22:30 +0000 (0:00:00.279) 0:00:24.380 *******
2026-02-02 03:22:32.840317 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:22:32.840324 | orchestrator |
2026-02-02 03:22:32.840331 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:22:32.840338 | orchestrator | Monday 02 February 2026 03:22:30 +0000 (0:00:00.231) 0:00:24.612 *******
2026-02-02 03:22:32.840346 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:22:32.840353 | orchestrator |
2026-02-02 03:22:32.840360 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:22:32.840367 | orchestrator | Monday 02 February 2026 03:22:30 +0000 (0:00:00.207) 0:00:24.819 *******
2026-02-02 03:22:32.840375 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:22:32.840388 | orchestrator |
2026-02-02 03:22:32.840395 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:22:32.840402 | orchestrator | Monday 02 February 2026 03:22:30 +0000 (0:00:00.219) 0:00:25.038 *******
2026-02-02 03:22:32.840409 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:22:32.840417 | orchestrator |
2026-02-02 03:22:32.840424 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:22:32.840431 | orchestrator | Monday 02 February 2026 03:22:31 +0000 (0:00:00.249) 0:00:25.288 *******
2026-02-02 03:22:32.840438 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-02-02 03:22:32.840447 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-02-02 03:22:32.840454 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-02-02 03:22:32.840461 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-02-02 03:22:32.840469 | orchestrator |
2026-02-02 03:22:32.840476 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:22:32.840483 | orchestrator | Monday 02 February 2026 03:22:32 +0000 (0:00:00.936) 0:00:26.224 *******
2026-02-02 03:22:32.840490 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:22:39.179296 | orchestrator |
2026-02-02 03:22:39.179407 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:22:39.179421 | orchestrator | Monday 02 February 2026 03:22:32 +0000 (0:00:00.698) 0:00:26.923 *******
2026-02-02 03:22:39.179431 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:22:39.179440 | orchestrator |
2026-02-02 03:22:39.179449 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:22:39.179458 | orchestrator | Monday 02 February 2026 03:22:33 +0000 (0:00:00.232) 0:00:27.155 *******
2026-02-02 03:22:39.179480 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:22:39.179489 | orchestrator |
2026-02-02 03:22:39.179497 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:22:39.179505 | orchestrator | Monday 02 February 2026 03:22:33 +0000 (0:00:00.218) 0:00:27.374 *******
2026-02-02 03:22:39.179514 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:22:39.179522 | orchestrator |
2026-02-02 03:22:39.179530 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-02-02 03:22:39.179539 | orchestrator | Monday 02 February 2026 03:22:33 +0000 (0:00:00.259) 0:00:27.634 *******
2026-02-02 03:22:39.179547 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-02-02 03:22:39.179556 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-02-02 03:22:39.179564 | orchestrator |
2026-02-02 03:22:39.179572 | orchestrator | TASK [Generate WAL VG names]
*************************************************** 2026-02-02 03:22:39.179580 | orchestrator | Monday 02 February 2026 03:22:33 +0000 (0:00:00.186) 0:00:27.821 ******* 2026-02-02 03:22:39.179588 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:22:39.179597 | orchestrator | 2026-02-02 03:22:39.179605 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-02 03:22:39.179613 | orchestrator | Monday 02 February 2026 03:22:33 +0000 (0:00:00.151) 0:00:27.972 ******* 2026-02-02 03:22:39.179621 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:22:39.179629 | orchestrator | 2026-02-02 03:22:39.179637 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-02 03:22:39.179645 | orchestrator | Monday 02 February 2026 03:22:34 +0000 (0:00:00.157) 0:00:28.130 ******* 2026-02-02 03:22:39.179653 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:22:39.179661 | orchestrator | 2026-02-02 03:22:39.179670 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-02 03:22:39.179678 | orchestrator | Monday 02 February 2026 03:22:34 +0000 (0:00:00.136) 0:00:28.267 ******* 2026-02-02 03:22:39.179686 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:22:39.179695 | orchestrator | 2026-02-02 03:22:39.179703 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-02 03:22:39.179711 | orchestrator | Monday 02 February 2026 03:22:34 +0000 (0:00:00.152) 0:00:28.419 ******* 2026-02-02 03:22:39.179741 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'}}) 2026-02-02 03:22:39.179751 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '106e1245-4ea8-54a2-9b27-5c2b147fae19'}}) 2026-02-02 03:22:39.179760 | orchestrator | 2026-02-02 03:22:39.179768 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-02-02 03:22:39.179776 | orchestrator | Monday 02 February 2026 03:22:34 +0000 (0:00:00.206) 0:00:28.626 ******* 2026-02-02 03:22:39.179785 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'}})  2026-02-02 03:22:39.179795 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '106e1245-4ea8-54a2-9b27-5c2b147fae19'}})  2026-02-02 03:22:39.179803 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:22:39.179811 | orchestrator | 2026-02-02 03:22:39.179819 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-02 03:22:39.179828 | orchestrator | Monday 02 February 2026 03:22:34 +0000 (0:00:00.163) 0:00:28.790 ******* 2026-02-02 03:22:39.179836 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'}})  2026-02-02 03:22:39.179844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '106e1245-4ea8-54a2-9b27-5c2b147fae19'}})  2026-02-02 03:22:39.179852 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:22:39.179860 | orchestrator | 2026-02-02 03:22:39.179868 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-02 03:22:39.179876 | orchestrator | Monday 02 February 2026 03:22:35 +0000 (0:00:00.393) 0:00:29.183 ******* 2026-02-02 03:22:39.179884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'}})  2026-02-02 03:22:39.179892 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '106e1245-4ea8-54a2-9b27-5c2b147fae19'}})  2026-02-02 03:22:39.179900 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:22:39.179909 | 
orchestrator | 2026-02-02 03:22:39.179922 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-02 03:22:39.179935 | orchestrator | Monday 02 February 2026 03:22:35 +0000 (0:00:00.173) 0:00:29.356 ******* 2026-02-02 03:22:39.179948 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:22:39.179960 | orchestrator | 2026-02-02 03:22:39.179973 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-02 03:22:39.179987 | orchestrator | Monday 02 February 2026 03:22:35 +0000 (0:00:00.147) 0:00:29.504 ******* 2026-02-02 03:22:39.180001 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:22:39.180014 | orchestrator | 2026-02-02 03:22:39.180028 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-02 03:22:39.180038 | orchestrator | Monday 02 February 2026 03:22:35 +0000 (0:00:00.153) 0:00:29.657 ******* 2026-02-02 03:22:39.180119 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:22:39.180130 | orchestrator | 2026-02-02 03:22:39.180138 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-02 03:22:39.180152 | orchestrator | Monday 02 February 2026 03:22:35 +0000 (0:00:00.149) 0:00:29.806 ******* 2026-02-02 03:22:39.180167 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:22:39.180181 | orchestrator | 2026-02-02 03:22:39.180196 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-02 03:22:39.180205 | orchestrator | Monday 02 February 2026 03:22:35 +0000 (0:00:00.157) 0:00:29.964 ******* 2026-02-02 03:22:39.180219 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:22:39.180227 | orchestrator | 2026-02-02 03:22:39.180235 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-02 03:22:39.180243 | orchestrator | Monday 02 February 2026 03:22:36 +0000 
(0:00:00.129) 0:00:30.093 ******* 2026-02-02 03:22:39.180259 | orchestrator | ok: [testbed-node-4] => { 2026-02-02 03:22:39.180267 | orchestrator |  "ceph_osd_devices": { 2026-02-02 03:22:39.180275 | orchestrator |  "sdb": { 2026-02-02 03:22:39.180283 | orchestrator |  "osd_lvm_uuid": "6932a8d0-72db-59d0-a33a-0c6e2cbd6a89" 2026-02-02 03:22:39.180292 | orchestrator |  }, 2026-02-02 03:22:39.180300 | orchestrator |  "sdc": { 2026-02-02 03:22:39.180308 | orchestrator |  "osd_lvm_uuid": "106e1245-4ea8-54a2-9b27-5c2b147fae19" 2026-02-02 03:22:39.180316 | orchestrator |  } 2026-02-02 03:22:39.180324 | orchestrator |  } 2026-02-02 03:22:39.180332 | orchestrator | } 2026-02-02 03:22:39.180340 | orchestrator | 2026-02-02 03:22:39.180348 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-02 03:22:39.180356 | orchestrator | Monday 02 February 2026 03:22:36 +0000 (0:00:00.145) 0:00:30.239 ******* 2026-02-02 03:22:39.180364 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:22:39.180372 | orchestrator | 2026-02-02 03:22:39.180397 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-02 03:22:39.180411 | orchestrator | Monday 02 February 2026 03:22:36 +0000 (0:00:00.164) 0:00:30.403 ******* 2026-02-02 03:22:39.180424 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:22:39.180438 | orchestrator | 2026-02-02 03:22:39.180451 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-02 03:22:39.180462 | orchestrator | Monday 02 February 2026 03:22:36 +0000 (0:00:00.155) 0:00:30.558 ******* 2026-02-02 03:22:39.180471 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:22:39.180478 | orchestrator | 2026-02-02 03:22:39.180486 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-02 03:22:39.180494 | orchestrator | Monday 02 February 2026 03:22:36 +0000 
(0:00:00.137) 0:00:30.696 ******* 2026-02-02 03:22:39.180502 | orchestrator | changed: [testbed-node-4] => { 2026-02-02 03:22:39.180510 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-02 03:22:39.180518 | orchestrator |  "ceph_osd_devices": { 2026-02-02 03:22:39.180526 | orchestrator |  "sdb": { 2026-02-02 03:22:39.180534 | orchestrator |  "osd_lvm_uuid": "6932a8d0-72db-59d0-a33a-0c6e2cbd6a89" 2026-02-02 03:22:39.180542 | orchestrator |  }, 2026-02-02 03:22:39.180550 | orchestrator |  "sdc": { 2026-02-02 03:22:39.180558 | orchestrator |  "osd_lvm_uuid": "106e1245-4ea8-54a2-9b27-5c2b147fae19" 2026-02-02 03:22:39.180566 | orchestrator |  } 2026-02-02 03:22:39.180574 | orchestrator |  }, 2026-02-02 03:22:39.180582 | orchestrator |  "lvm_volumes": [ 2026-02-02 03:22:39.180590 | orchestrator |  { 2026-02-02 03:22:39.180598 | orchestrator |  "data": "osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89", 2026-02-02 03:22:39.180606 | orchestrator |  "data_vg": "ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89" 2026-02-02 03:22:39.180613 | orchestrator |  }, 2026-02-02 03:22:39.180621 | orchestrator |  { 2026-02-02 03:22:39.180629 | orchestrator |  "data": "osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19", 2026-02-02 03:22:39.180637 | orchestrator |  "data_vg": "ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19" 2026-02-02 03:22:39.180645 | orchestrator |  } 2026-02-02 03:22:39.180653 | orchestrator |  ] 2026-02-02 03:22:39.180661 | orchestrator |  } 2026-02-02 03:22:39.180674 | orchestrator | } 2026-02-02 03:22:39.180688 | orchestrator | 2026-02-02 03:22:39.180702 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-02 03:22:39.180716 | orchestrator | Monday 02 February 2026 03:22:37 +0000 (0:00:00.460) 0:00:31.156 ******* 2026-02-02 03:22:39.180729 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-02 03:22:39.180743 | orchestrator | 2026-02-02 03:22:39.180751 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-02-02 03:22:39.180759 | orchestrator | 2026-02-02 03:22:39.180767 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-02 03:22:39.180775 | orchestrator | Monday 02 February 2026 03:22:38 +0000 (0:00:01.206) 0:00:32.363 ******* 2026-02-02 03:22:39.180789 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-02 03:22:39.180797 | orchestrator | 2026-02-02 03:22:39.180805 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-02 03:22:39.180813 | orchestrator | Monday 02 February 2026 03:22:38 +0000 (0:00:00.258) 0:00:32.621 ******* 2026-02-02 03:22:39.180821 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:22:39.180829 | orchestrator | 2026-02-02 03:22:39.180837 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 03:22:39.180845 | orchestrator | Monday 02 February 2026 03:22:38 +0000 (0:00:00.234) 0:00:32.856 ******* 2026-02-02 03:22:39.180852 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-02 03:22:39.180860 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-02 03:22:39.180868 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-02 03:22:39.180876 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-02 03:22:39.180884 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-02 03:22:39.180899 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-02 03:22:48.443987 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-02 03:22:48.444156 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-02 03:22:48.444194 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-02 03:22:48.444206 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-02 03:22:48.444232 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-02 03:22:48.444243 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-02 03:22:48.444255 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-02 03:22:48.444266 | orchestrator | 2026-02-02 03:22:48.444278 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 03:22:48.444289 | orchestrator | Monday 02 February 2026 03:22:39 +0000 (0:00:00.405) 0:00:33.262 ******* 2026-02-02 03:22:48.444300 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:48.444311 | orchestrator | 2026-02-02 03:22:48.444321 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 03:22:48.444327 | orchestrator | Monday 02 February 2026 03:22:39 +0000 (0:00:00.208) 0:00:33.470 ******* 2026-02-02 03:22:48.444334 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:48.444340 | orchestrator | 2026-02-02 03:22:48.444347 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 03:22:48.444353 | orchestrator | Monday 02 February 2026 03:22:39 +0000 (0:00:00.252) 0:00:33.723 ******* 2026-02-02 03:22:48.444359 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:48.444365 | orchestrator | 2026-02-02 03:22:48.444372 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 03:22:48.444378 | 
orchestrator | Monday 02 February 2026 03:22:39 +0000 (0:00:00.219) 0:00:33.942 ******* 2026-02-02 03:22:48.444384 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:48.444390 | orchestrator | 2026-02-02 03:22:48.444397 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 03:22:48.444403 | orchestrator | Monday 02 February 2026 03:22:40 +0000 (0:00:00.670) 0:00:34.613 ******* 2026-02-02 03:22:48.444409 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:48.444415 | orchestrator | 2026-02-02 03:22:48.444422 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 03:22:48.444428 | orchestrator | Monday 02 February 2026 03:22:40 +0000 (0:00:00.212) 0:00:34.826 ******* 2026-02-02 03:22:48.444451 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:48.444459 | orchestrator | 2026-02-02 03:22:48.444467 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 03:22:48.444476 | orchestrator | Monday 02 February 2026 03:22:40 +0000 (0:00:00.210) 0:00:35.036 ******* 2026-02-02 03:22:48.444486 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:48.444496 | orchestrator | 2026-02-02 03:22:48.444506 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 03:22:48.444517 | orchestrator | Monday 02 February 2026 03:22:41 +0000 (0:00:00.206) 0:00:35.242 ******* 2026-02-02 03:22:48.444529 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:48.444539 | orchestrator | 2026-02-02 03:22:48.444550 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 03:22:48.444559 | orchestrator | Monday 02 February 2026 03:22:41 +0000 (0:00:00.238) 0:00:35.481 ******* 2026-02-02 03:22:48.444567 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73) 2026-02-02 03:22:48.444575 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73) 2026-02-02 03:22:48.444582 | orchestrator | 2026-02-02 03:22:48.444590 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 03:22:48.444597 | orchestrator | Monday 02 February 2026 03:22:41 +0000 (0:00:00.470) 0:00:35.952 ******* 2026-02-02 03:22:48.444604 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_bc39994b-92aa-40f2-807e-6457f6f8ea40) 2026-02-02 03:22:48.444611 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_bc39994b-92aa-40f2-807e-6457f6f8ea40) 2026-02-02 03:22:48.444618 | orchestrator | 2026-02-02 03:22:48.444625 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 03:22:48.444632 | orchestrator | Monday 02 February 2026 03:22:42 +0000 (0:00:00.471) 0:00:36.423 ******* 2026-02-02 03:22:48.444639 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_10248bd5-0286-487e-81b0-791c797cb21b) 2026-02-02 03:22:48.444646 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_10248bd5-0286-487e-81b0-791c797cb21b) 2026-02-02 03:22:48.444653 | orchestrator | 2026-02-02 03:22:48.444660 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 03:22:48.444667 | orchestrator | Monday 02 February 2026 03:22:42 +0000 (0:00:00.450) 0:00:36.874 ******* 2026-02-02 03:22:48.444675 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e969e129-18ea-460f-85bc-8dfb49c82359) 2026-02-02 03:22:48.444682 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e969e129-18ea-460f-85bc-8dfb49c82359) 2026-02-02 03:22:48.444690 | orchestrator | 2026-02-02 03:22:48.444697 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-02-02 03:22:48.444707 | orchestrator | Monday 02 February 2026 03:22:43 +0000 (0:00:00.464) 0:00:37.339 ******* 2026-02-02 03:22:48.444717 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-02 03:22:48.444727 | orchestrator | 2026-02-02 03:22:48.444737 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:22:48.444767 | orchestrator | Monday 02 February 2026 03:22:43 +0000 (0:00:00.371) 0:00:37.711 ******* 2026-02-02 03:22:48.444780 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-02-02 03:22:48.444791 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-02-02 03:22:48.444802 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-02-02 03:22:48.444818 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-02-02 03:22:48.444825 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-02-02 03:22:48.444831 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-02-02 03:22:48.444844 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-02-02 03:22:48.444851 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-02 03:22:48.444857 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-02 03:22:48.444863 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-02 03:22:48.444869 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-02-02 03:22:48.444875 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-02-02 03:22:48.444881 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-02-02 03:22:48.444888 | orchestrator | 2026-02-02 03:22:48.444894 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:22:48.444900 | orchestrator | Monday 02 February 2026 03:22:44 +0000 (0:00:00.654) 0:00:38.365 ******* 2026-02-02 03:22:48.444907 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:48.444913 | orchestrator | 2026-02-02 03:22:48.444919 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:22:48.444925 | orchestrator | Monday 02 February 2026 03:22:44 +0000 (0:00:00.242) 0:00:38.607 ******* 2026-02-02 03:22:48.444931 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:48.444937 | orchestrator | 2026-02-02 03:22:48.444944 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:22:48.444950 | orchestrator | Monday 02 February 2026 03:22:44 +0000 (0:00:00.241) 0:00:38.848 ******* 2026-02-02 03:22:48.444956 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:48.444962 | orchestrator | 2026-02-02 03:22:48.444970 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:22:48.444980 | orchestrator | Monday 02 February 2026 03:22:45 +0000 (0:00:00.245) 0:00:39.094 ******* 2026-02-02 03:22:48.444990 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:48.445001 | orchestrator | 2026-02-02 03:22:48.445011 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:22:48.445022 | orchestrator | Monday 02 February 2026 03:22:45 +0000 (0:00:00.208) 0:00:39.302 ******* 2026-02-02 03:22:48.445033 
| orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:48.445044 | orchestrator | 2026-02-02 03:22:48.445073 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:22:48.445083 | orchestrator | Monday 02 February 2026 03:22:45 +0000 (0:00:00.220) 0:00:39.523 ******* 2026-02-02 03:22:48.445094 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:48.445106 | orchestrator | 2026-02-02 03:22:48.445112 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:22:48.445118 | orchestrator | Monday 02 February 2026 03:22:45 +0000 (0:00:00.225) 0:00:39.748 ******* 2026-02-02 03:22:48.445125 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:48.445131 | orchestrator | 2026-02-02 03:22:48.445137 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:22:48.445144 | orchestrator | Monday 02 February 2026 03:22:45 +0000 (0:00:00.222) 0:00:39.970 ******* 2026-02-02 03:22:48.445150 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:48.445156 | orchestrator | 2026-02-02 03:22:48.445163 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:22:48.445173 | orchestrator | Monday 02 February 2026 03:22:46 +0000 (0:00:00.255) 0:00:40.225 ******* 2026-02-02 03:22:48.445184 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-02-02 03:22:48.445194 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-02-02 03:22:48.445205 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-02 03:22:48.445216 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-02 03:22:48.445226 | orchestrator | 2026-02-02 03:22:48.445245 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:22:48.445255 | orchestrator | Monday 02 February 2026 03:22:47 +0000 (0:00:00.928) 
0:00:41.154 ******* 2026-02-02 03:22:48.445265 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:48.445275 | orchestrator | 2026-02-02 03:22:48.445281 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:22:48.445287 | orchestrator | Monday 02 February 2026 03:22:47 +0000 (0:00:00.195) 0:00:41.350 ******* 2026-02-02 03:22:48.445293 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:48.445300 | orchestrator | 2026-02-02 03:22:48.445306 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:22:48.445316 | orchestrator | Monday 02 February 2026 03:22:47 +0000 (0:00:00.213) 0:00:41.563 ******* 2026-02-02 03:22:48.445326 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:48.445336 | orchestrator | 2026-02-02 03:22:48.445345 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:22:48.445355 | orchestrator | Monday 02 February 2026 03:22:48 +0000 (0:00:00.735) 0:00:42.299 ******* 2026-02-02 03:22:48.445365 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:48.445373 | orchestrator | 2026-02-02 03:22:48.445390 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-02 03:22:52.887282 | orchestrator | Monday 02 February 2026 03:22:48 +0000 (0:00:00.225) 0:00:42.525 ******* 2026-02-02 03:22:52.887417 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-02-02 03:22:52.887441 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-02-02 03:22:52.887459 | orchestrator | 2026-02-02 03:22:52.887479 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-02 03:22:52.887519 | orchestrator | Monday 02 February 2026 03:22:48 +0000 (0:00:00.196) 0:00:42.721 ******* 2026-02-02 03:22:52.887531 | orchestrator | skipping: 
[testbed-node-5] 2026-02-02 03:22:52.887542 | orchestrator | 2026-02-02 03:22:52.887553 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-02 03:22:52.887563 | orchestrator | Monday 02 February 2026 03:22:48 +0000 (0:00:00.154) 0:00:42.876 ******* 2026-02-02 03:22:52.887573 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:52.887583 | orchestrator | 2026-02-02 03:22:52.887593 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-02 03:22:52.887603 | orchestrator | Monday 02 February 2026 03:22:48 +0000 (0:00:00.157) 0:00:43.033 ******* 2026-02-02 03:22:52.887612 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:52.887622 | orchestrator | 2026-02-02 03:22:52.887632 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-02 03:22:52.887642 | orchestrator | Monday 02 February 2026 03:22:49 +0000 (0:00:00.143) 0:00:43.176 ******* 2026-02-02 03:22:52.887652 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:22:52.887663 | orchestrator | 2026-02-02 03:22:52.887672 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-02 03:22:52.887682 | orchestrator | Monday 02 February 2026 03:22:49 +0000 (0:00:00.172) 0:00:43.349 ******* 2026-02-02 03:22:52.887692 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd54a22ee-8606-5662-853b-b39e232caa8f'}}) 2026-02-02 03:22:52.887708 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e4fc6918-1796-5a48-9994-5f31e91196e6'}}) 2026-02-02 03:22:52.887725 | orchestrator | 2026-02-02 03:22:52.887741 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-02 03:22:52.887757 | orchestrator | Monday 02 February 2026 03:22:49 +0000 (0:00:00.194) 0:00:43.544 ******* 2026-02-02 03:22:52.887772 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd54a22ee-8606-5662-853b-b39e232caa8f'}})  2026-02-02 03:22:52.887788 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e4fc6918-1796-5a48-9994-5f31e91196e6'}})  2026-02-02 03:22:52.887804 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:52.887850 | orchestrator | 2026-02-02 03:22:52.887865 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-02 03:22:52.887881 | orchestrator | Monday 02 February 2026 03:22:49 +0000 (0:00:00.172) 0:00:43.716 ******* 2026-02-02 03:22:52.887896 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd54a22ee-8606-5662-853b-b39e232caa8f'}})  2026-02-02 03:22:52.887911 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e4fc6918-1796-5a48-9994-5f31e91196e6'}})  2026-02-02 03:22:52.887926 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:52.887940 | orchestrator | 2026-02-02 03:22:52.887954 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-02 03:22:52.887969 | orchestrator | Monday 02 February 2026 03:22:49 +0000 (0:00:00.193) 0:00:43.910 ******* 2026-02-02 03:22:52.887985 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd54a22ee-8606-5662-853b-b39e232caa8f'}})  2026-02-02 03:22:52.888001 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e4fc6918-1796-5a48-9994-5f31e91196e6'}})  2026-02-02 03:22:52.888016 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:52.888031 | orchestrator | 2026-02-02 03:22:52.888123 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-02 03:22:52.888146 | orchestrator | Monday 02 February 2026 03:22:49 +0000 
(0:00:00.148) 0:00:44.059 ******* 2026-02-02 03:22:52.888163 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:22:52.888180 | orchestrator | 2026-02-02 03:22:52.888196 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-02 03:22:52.888212 | orchestrator | Monday 02 February 2026 03:22:50 +0000 (0:00:00.143) 0:00:44.202 ******* 2026-02-02 03:22:52.888225 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:22:52.888235 | orchestrator | 2026-02-02 03:22:52.888245 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-02 03:22:52.888255 | orchestrator | Monday 02 February 2026 03:22:50 +0000 (0:00:00.415) 0:00:44.618 ******* 2026-02-02 03:22:52.888264 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:52.888274 | orchestrator | 2026-02-02 03:22:52.888284 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-02 03:22:52.888294 | orchestrator | Monday 02 February 2026 03:22:50 +0000 (0:00:00.146) 0:00:44.764 ******* 2026-02-02 03:22:52.888303 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:52.888313 | orchestrator | 2026-02-02 03:22:52.888323 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-02 03:22:52.888332 | orchestrator | Monday 02 February 2026 03:22:50 +0000 (0:00:00.151) 0:00:44.915 ******* 2026-02-02 03:22:52.888342 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:52.888351 | orchestrator | 2026-02-02 03:22:52.888361 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-02 03:22:52.888371 | orchestrator | Monday 02 February 2026 03:22:50 +0000 (0:00:00.157) 0:00:45.073 ******* 2026-02-02 03:22:52.888380 | orchestrator | ok: [testbed-node-5] => { 2026-02-02 03:22:52.888404 | orchestrator |  "ceph_osd_devices": { 2026-02-02 03:22:52.888425 | orchestrator |  "sdb": { 
2026-02-02 03:22:52.888458 | orchestrator |  "osd_lvm_uuid": "d54a22ee-8606-5662-853b-b39e232caa8f" 2026-02-02 03:22:52.888469 | orchestrator |  }, 2026-02-02 03:22:52.888479 | orchestrator |  "sdc": { 2026-02-02 03:22:52.888489 | orchestrator |  "osd_lvm_uuid": "e4fc6918-1796-5a48-9994-5f31e91196e6" 2026-02-02 03:22:52.888499 | orchestrator |  } 2026-02-02 03:22:52.888514 | orchestrator |  } 2026-02-02 03:22:52.888531 | orchestrator | } 2026-02-02 03:22:52.888548 | orchestrator | 2026-02-02 03:22:52.888577 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-02 03:22:52.888594 | orchestrator | Monday 02 February 2026 03:22:51 +0000 (0:00:00.147) 0:00:45.221 ******* 2026-02-02 03:22:52.888609 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:52.888641 | orchestrator | 2026-02-02 03:22:52.888659 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-02 03:22:52.888677 | orchestrator | Monday 02 February 2026 03:22:51 +0000 (0:00:00.148) 0:00:45.369 ******* 2026-02-02 03:22:52.888693 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:52.888709 | orchestrator | 2026-02-02 03:22:52.888725 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-02 03:22:52.888743 | orchestrator | Monday 02 February 2026 03:22:51 +0000 (0:00:00.135) 0:00:45.504 ******* 2026-02-02 03:22:52.888759 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:22:52.888774 | orchestrator | 2026-02-02 03:22:52.888790 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-02 03:22:52.888806 | orchestrator | Monday 02 February 2026 03:22:51 +0000 (0:00:00.131) 0:00:45.636 ******* 2026-02-02 03:22:52.888823 | orchestrator | changed: [testbed-node-5] => { 2026-02-02 03:22:52.888839 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-02 03:22:52.888857 | orchestrator | 
 "ceph_osd_devices": { 2026-02-02 03:22:52.888873 | orchestrator |  "sdb": { 2026-02-02 03:22:52.888890 | orchestrator |  "osd_lvm_uuid": "d54a22ee-8606-5662-853b-b39e232caa8f" 2026-02-02 03:22:52.888904 | orchestrator |  }, 2026-02-02 03:22:52.888915 | orchestrator |  "sdc": { 2026-02-02 03:22:52.888925 | orchestrator |  "osd_lvm_uuid": "e4fc6918-1796-5a48-9994-5f31e91196e6" 2026-02-02 03:22:52.888934 | orchestrator |  } 2026-02-02 03:22:52.888944 | orchestrator |  }, 2026-02-02 03:22:52.888954 | orchestrator |  "lvm_volumes": [ 2026-02-02 03:22:52.888964 | orchestrator |  { 2026-02-02 03:22:52.888974 | orchestrator |  "data": "osd-block-d54a22ee-8606-5662-853b-b39e232caa8f", 2026-02-02 03:22:52.888989 | orchestrator |  "data_vg": "ceph-d54a22ee-8606-5662-853b-b39e232caa8f" 2026-02-02 03:22:52.889007 | orchestrator |  }, 2026-02-02 03:22:52.889017 | orchestrator |  { 2026-02-02 03:22:52.889027 | orchestrator |  "data": "osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6", 2026-02-02 03:22:52.889036 | orchestrator |  "data_vg": "ceph-e4fc6918-1796-5a48-9994-5f31e91196e6" 2026-02-02 03:22:52.889069 | orchestrator |  } 2026-02-02 03:22:52.889080 | orchestrator |  ] 2026-02-02 03:22:52.889090 | orchestrator |  } 2026-02-02 03:22:52.889100 | orchestrator | } 2026-02-02 03:22:52.889109 | orchestrator | 2026-02-02 03:22:52.889119 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-02 03:22:52.889129 | orchestrator | Monday 02 February 2026 03:22:51 +0000 (0:00:00.226) 0:00:45.863 ******* 2026-02-02 03:22:52.889139 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-02 03:22:52.889148 | orchestrator | 2026-02-02 03:22:52.889158 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 03:22:52.889168 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-02 03:22:52.889180 | 
orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-02 03:22:52.889189 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-02 03:22:52.889199 | orchestrator | 2026-02-02 03:22:52.889209 | orchestrator | 2026-02-02 03:22:52.889219 | orchestrator | 2026-02-02 03:22:52.889228 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 03:22:52.889238 | orchestrator | Monday 02 February 2026 03:22:52 +0000 (0:00:01.095) 0:00:46.958 ******* 2026-02-02 03:22:52.889248 | orchestrator | =============================================================================== 2026-02-02 03:22:52.889257 | orchestrator | Write configuration file ------------------------------------------------ 4.31s 2026-02-02 03:22:52.889277 | orchestrator | Add known partitions to the list of available block devices ------------- 2.00s 2026-02-02 03:22:52.889286 | orchestrator | Add known links to the list of available block devices ------------------ 1.32s 2026-02-02 03:22:52.889296 | orchestrator | Add known partitions to the list of available block devices ------------- 1.19s 2026-02-02 03:22:52.889306 | orchestrator | Print configuration data ------------------------------------------------ 1.15s 2026-02-02 03:22:52.889315 | orchestrator | Add known links to the list of available block devices ------------------ 0.95s 2026-02-02 03:22:52.889325 | orchestrator | Add known partitions to the list of available block devices ------------- 0.94s 2026-02-02 03:22:52.889335 | orchestrator | Add known partitions to the list of available block devices ------------- 0.93s 2026-02-02 03:22:52.889345 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.82s 2026-02-02 03:22:52.889354 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.76s 2026-02-02 
03:22:52.889364 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2026-02-02 03:22:52.889374 | orchestrator | Get initial list of available block devices ----------------------------- 0.72s 2026-02-02 03:22:52.889383 | orchestrator | Set OSD devices config data --------------------------------------------- 0.72s 2026-02-02 03:22:52.889405 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.72s 2026-02-02 03:22:53.384723 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s 2026-02-02 03:22:53.384821 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2026-02-02 03:22:53.384834 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s 2026-02-02 03:22:53.384862 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2026-02-02 03:22:53.384875 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2026-02-02 03:22:53.384892 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2026-02-02 03:23:16.268784 | orchestrator | 2026-02-02 03:23:16 | INFO  | Task a893e804-da56-4dd7-8c24-7dd8b911cb79 (sync inventory) is running in background. Output coming soon. 
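The `Print configuration data` task in the play above shows how each disk in `ceph_osd_devices` is expanded into an `lvm_volumes` entry: the `osd_lvm_uuid` is prefixed with `osd-block-` for the logical volume name and with `ceph-` for the volume group name. A minimal sketch of that mapping, using the UUIDs printed for testbed-node-5 (the helper function is hypothetical, not part of OSISM; the real derivation lives in the Ansible tasks):

```python
# Hypothetical helper mirroring the uuid -> lvm_volumes mapping seen in the log.
def build_lvm_volumes(ceph_osd_devices):
    """Expand each ceph_osd_devices entry into an lvm_volumes record."""
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for spec in ceph_osd_devices.values()
    ]

# Values taken from the "Print ceph_osd_devices" output above.
devices = {
    "sdb": {"osd_lvm_uuid": "d54a22ee-8606-5662-853b-b39e232caa8f"},
    "sdc": {"osd_lvm_uuid": "e4fc6918-1796-5a48-9994-5f31e91196e6"},
}
volumes = build_lvm_volumes(devices)
```

With these inputs, `volumes` matches the `lvm_volumes` list printed by the `Print configuration data` task.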
2026-02-02 03:23:47.792696 | orchestrator | 2026-02-02 03:23:17 | INFO  | Starting group_vars file reorganization 2026-02-02 03:23:47.792805 | orchestrator | 2026-02-02 03:23:17 | INFO  | Moved 0 file(s) to their respective directories 2026-02-02 03:23:47.792819 | orchestrator | 2026-02-02 03:23:17 | INFO  | Group_vars file reorganization completed 2026-02-02 03:23:47.792827 | orchestrator | 2026-02-02 03:23:21 | INFO  | Starting variable preparation from inventory 2026-02-02 03:23:47.792837 | orchestrator | 2026-02-02 03:23:24 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-02-02 03:23:47.792845 | orchestrator | 2026-02-02 03:23:24 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-02-02 03:23:47.792853 | orchestrator | 2026-02-02 03:23:24 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-02-02 03:23:47.792861 | orchestrator | 2026-02-02 03:23:24 | INFO  | 3 file(s) written, 6 host(s) processed 2026-02-02 03:23:47.792869 | orchestrator | 2026-02-02 03:23:24 | INFO  | Variable preparation completed 2026-02-02 03:23:47.792877 | orchestrator | 2026-02-02 03:23:26 | INFO  | Starting inventory overwrite handling 2026-02-02 03:23:47.792884 | orchestrator | 2026-02-02 03:23:26 | INFO  | Handling group overwrites in 99-overwrite 2026-02-02 03:23:47.792892 | orchestrator | 2026-02-02 03:23:26 | INFO  | Removing group frr:children from 60-generic 2026-02-02 03:23:47.792900 | orchestrator | 2026-02-02 03:23:26 | INFO  | Removing group netbird:children from 50-infrastructure 2026-02-02 03:23:47.792908 | orchestrator | 2026-02-02 03:23:26 | INFO  | Removing group ceph-rgw from 50-ceph 2026-02-02 03:23:47.792941 | orchestrator | 2026-02-02 03:23:26 | INFO  | Removing group ceph-mds from 50-ceph 2026-02-02 03:23:47.792949 | orchestrator | 2026-02-02 03:23:26 | INFO  | Handling group overwrites in 20-roles 2026-02-02 03:23:47.792957 | orchestrator | 2026-02-02 03:23:26 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-02-02 03:23:47.792965 | orchestrator | 2026-02-02 03:23:26 | INFO  | Removed 5 group(s) in total 2026-02-02 03:23:47.792973 | orchestrator | 2026-02-02 03:23:26 | INFO  | Inventory overwrite handling completed 2026-02-02 03:23:47.792980 | orchestrator | 2026-02-02 03:23:27 | INFO  | Starting merge of inventory files 2026-02-02 03:23:47.792988 | orchestrator | 2026-02-02 03:23:27 | INFO  | Inventory files merged successfully 2026-02-02 03:23:47.792996 | orchestrator | 2026-02-02 03:23:33 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-02-02 03:23:47.793044 | orchestrator | 2026-02-02 03:23:46 | INFO  | Successfully wrote ClusterShell configuration 2026-02-02 03:23:47.793053 | orchestrator | [master b062b2a] 2026-02-02-03-23 2026-02-02 03:23:47.793063 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2026-02-02 03:23:50.321490 | orchestrator | 2026-02-02 03:23:50 | INFO  | Task 489f79aa-08fb-47f8-9c8b-cea5e4b7a647 (ceph-create-lvm-devices) was prepared for execution. 2026-02-02 03:23:50.321561 | orchestrator | 2026-02-02 03:23:50 | INFO  | It takes a moment until task 489f79aa-08fb-47f8-9c8b-cea5e4b7a647 (ceph-create-lvm-devices) has been started and output is visible here. 
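The `ceph-create-lvm-devices` play whose output follows creates one volume group and one logical volume per `lvm_volumes` entry (the `Create block VGs` and `Create block LVs` tasks). A sketch of the equivalent LVM command lines for a single entry; the physical-volume path and the `lvcreate` flags here are illustrative assumptions, not taken from the play itself:

```python
def lvm_commands(entry, pv_device):
    """Return vgcreate/lvcreate command strings for one lvm_volumes entry.

    pv_device is an illustrative assumption; the play resolves the real
    backing device from ceph_osd_devices. The -l 100%FREE extent flag is
    likewise an assumption about how the LV fills the VG.
    """
    vg = entry["data_vg"]
    lv = entry["data"]
    return [
        f"vgcreate {vg} {pv_device}",
        f"lvcreate -l 100%FREE -n {lv} {vg}",
    ]

# Entry values taken from the "Create block VGs" output for testbed-node-3.
cmds = lvm_commands(
    {
        "data": "osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379",
        "data_vg": "ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379",
    },
    "/dev/sdb",
)
```

This only models the naming scheme visible in the log; the actual tasks use Ansible's LVM modules rather than shelling out.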
2026-02-02 03:24:02.917710 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-02 03:24:02.917863 | orchestrator | 2.16.14 2026-02-02 03:24:02.917906 | orchestrator | 2026-02-02 03:24:02.917920 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-02 03:24:02.917934 | orchestrator | 2026-02-02 03:24:02.917945 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-02 03:24:02.917957 | orchestrator | Monday 02 February 2026 03:23:55 +0000 (0:00:00.355) 0:00:00.355 ******* 2026-02-02 03:24:02.917968 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-02 03:24:02.917979 | orchestrator | 2026-02-02 03:24:02.917990 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-02 03:24:02.918159 | orchestrator | Monday 02 February 2026 03:23:55 +0000 (0:00:00.254) 0:00:00.610 ******* 2026-02-02 03:24:02.918172 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:24:02.918183 | orchestrator | 2026-02-02 03:24:02.918195 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 03:24:02.918206 | orchestrator | Monday 02 February 2026 03:23:55 +0000 (0:00:00.243) 0:00:00.853 ******* 2026-02-02 03:24:02.918217 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-02-02 03:24:02.918230 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-02-02 03:24:02.918260 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-02-02 03:24:02.918273 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-02-02 03:24:02.918286 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-02-02 
03:24:02.918297 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-02-02 03:24:02.918309 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-02-02 03:24:02.918321 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-02-02 03:24:02.918332 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-02-02 03:24:02.918343 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-02-02 03:24:02.918377 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-02-02 03:24:02.918389 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-02-02 03:24:02.918400 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-02-02 03:24:02.918411 | orchestrator | 2026-02-02 03:24:02.918445 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 03:24:02.918457 | orchestrator | Monday 02 February 2026 03:23:56 +0000 (0:00:00.547) 0:00:01.401 ******* 2026-02-02 03:24:02.918468 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:24:02.918479 | orchestrator | 2026-02-02 03:24:02.918491 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 03:24:02.918502 | orchestrator | Monday 02 February 2026 03:23:56 +0000 (0:00:00.214) 0:00:01.615 ******* 2026-02-02 03:24:02.918514 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:24:02.918525 | orchestrator | 2026-02-02 03:24:02.918536 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 03:24:02.918548 | orchestrator | Monday 02 February 2026 03:23:56 +0000 (0:00:00.206) 0:00:01.822 ******* 2026-02-02 
03:24:02.918559 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:24:02.918569 | orchestrator | 2026-02-02 03:24:02.918581 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 03:24:02.918593 | orchestrator | Monday 02 February 2026 03:23:56 +0000 (0:00:00.230) 0:00:02.053 ******* 2026-02-02 03:24:02.918604 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:24:02.918614 | orchestrator | 2026-02-02 03:24:02.918623 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 03:24:02.918633 | orchestrator | Monday 02 February 2026 03:23:56 +0000 (0:00:00.211) 0:00:02.264 ******* 2026-02-02 03:24:02.918643 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:24:02.918652 | orchestrator | 2026-02-02 03:24:02.918662 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 03:24:02.918672 | orchestrator | Monday 02 February 2026 03:23:57 +0000 (0:00:00.234) 0:00:02.499 ******* 2026-02-02 03:24:02.918682 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:24:02.918691 | orchestrator | 2026-02-02 03:24:02.918701 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 03:24:02.918710 | orchestrator | Monday 02 February 2026 03:23:57 +0000 (0:00:00.195) 0:00:02.694 ******* 2026-02-02 03:24:02.918720 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:24:02.918729 | orchestrator | 2026-02-02 03:24:02.918739 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 03:24:02.918748 | orchestrator | Monday 02 February 2026 03:23:57 +0000 (0:00:00.223) 0:00:02.918 ******* 2026-02-02 03:24:02.918758 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:24:02.918767 | orchestrator | 2026-02-02 03:24:02.918777 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-02-02 03:24:02.918787 | orchestrator | Monday 02 February 2026 03:23:57 +0000 (0:00:00.211) 0:00:03.129 ******* 2026-02-02 03:24:02.918797 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58) 2026-02-02 03:24:02.918808 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58) 2026-02-02 03:24:02.918817 | orchestrator | 2026-02-02 03:24:02.918827 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 03:24:02.918856 | orchestrator | Monday 02 February 2026 03:23:58 +0000 (0:00:00.438) 0:00:03.568 ******* 2026-02-02 03:24:02.918866 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5578c4aa-4507-4a80-9665-78072b9f11f4) 2026-02-02 03:24:02.918887 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5578c4aa-4507-4a80-9665-78072b9f11f4) 2026-02-02 03:24:02.918898 | orchestrator | 2026-02-02 03:24:02.918921 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 03:24:02.918939 | orchestrator | Monday 02 February 2026 03:23:58 +0000 (0:00:00.670) 0:00:04.238 ******* 2026-02-02 03:24:02.918949 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1f26c814-af40-4046-ac8d-013998d956cc) 2026-02-02 03:24:02.918959 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1f26c814-af40-4046-ac8d-013998d956cc) 2026-02-02 03:24:02.918968 | orchestrator | 2026-02-02 03:24:02.918978 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 03:24:02.918987 | orchestrator | Monday 02 February 2026 03:23:59 +0000 (0:00:00.725) 0:00:04.964 ******* 2026-02-02 03:24:02.919014 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c15f901f-7629-41e5-bfd5-e721d3f198c6) 2026-02-02 03:24:02.919045 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c15f901f-7629-41e5-bfd5-e721d3f198c6) 2026-02-02 03:24:02.919062 | orchestrator | 2026-02-02 03:24:02.919072 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-02 03:24:02.919083 | orchestrator | Monday 02 February 2026 03:24:00 +0000 (0:00:00.923) 0:00:05.887 ******* 2026-02-02 03:24:02.919093 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-02 03:24:02.919102 | orchestrator | 2026-02-02 03:24:02.919112 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:24:02.919122 | orchestrator | Monday 02 February 2026 03:24:00 +0000 (0:00:00.343) 0:00:06.230 ******* 2026-02-02 03:24:02.919131 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-02-02 03:24:02.919141 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-02-02 03:24:02.919151 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-02-02 03:24:02.919160 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-02-02 03:24:02.919170 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-02-02 03:24:02.919179 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-02-02 03:24:02.919189 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-02-02 03:24:02.919199 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-02-02 03:24:02.919208 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-02-02 03:24:02.919218 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-02-02 03:24:02.919241 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-02-02 03:24:02.919251 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-02-02 03:24:02.919261 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-02-02 03:24:02.919271 | orchestrator | 2026-02-02 03:24:02.919281 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:24:02.919290 | orchestrator | Monday 02 February 2026 03:24:01 +0000 (0:00:00.454) 0:00:06.685 ******* 2026-02-02 03:24:02.919300 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:24:02.919310 | orchestrator | 2026-02-02 03:24:02.919320 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:24:02.919330 | orchestrator | Monday 02 February 2026 03:24:01 +0000 (0:00:00.208) 0:00:06.894 ******* 2026-02-02 03:24:02.919340 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:24:02.919349 | orchestrator | 2026-02-02 03:24:02.919359 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:24:02.919379 | orchestrator | Monday 02 February 2026 03:24:01 +0000 (0:00:00.226) 0:00:07.121 ******* 2026-02-02 03:24:02.919389 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:24:02.919407 | orchestrator | 2026-02-02 03:24:02.919417 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:24:02.919427 | orchestrator | Monday 02 February 2026 03:24:02 +0000 (0:00:00.226) 0:00:07.347 ******* 2026-02-02 03:24:02.919437 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:24:02.919447 | orchestrator | 2026-02-02 03:24:02.919457 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-02-02 03:24:02.919466 | orchestrator | Monday 02 February 2026 03:24:02 +0000 (0:00:00.217) 0:00:07.564 ******* 2026-02-02 03:24:02.919476 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:24:02.919486 | orchestrator | 2026-02-02 03:24:02.919496 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:24:02.919506 | orchestrator | Monday 02 February 2026 03:24:02 +0000 (0:00:00.193) 0:00:07.758 ******* 2026-02-02 03:24:02.919516 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:24:02.919525 | orchestrator | 2026-02-02 03:24:02.919535 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:24:02.919545 | orchestrator | Monday 02 February 2026 03:24:02 +0000 (0:00:00.214) 0:00:07.972 ******* 2026-02-02 03:24:02.919555 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:24:02.919565 | orchestrator | 2026-02-02 03:24:02.919581 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:24:11.250664 | orchestrator | Monday 02 February 2026 03:24:02 +0000 (0:00:00.217) 0:00:08.190 ******* 2026-02-02 03:24:11.250827 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:24:11.250837 | orchestrator | 2026-02-02 03:24:11.250844 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:24:11.250851 | orchestrator | Monday 02 February 2026 03:24:03 +0000 (0:00:00.684) 0:00:08.874 ******* 2026-02-02 03:24:11.250857 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-02-02 03:24:11.250863 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-02-02 03:24:11.250869 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-02-02 03:24:11.250875 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-02-02 03:24:11.250880 | orchestrator | 2026-02-02 
03:24:11.250886 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:24:11.250892 | orchestrator | Monday 02 February 2026 03:24:04 +0000 (0:00:00.710) 0:00:09.585 ******* 2026-02-02 03:24:11.250898 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:24:11.250903 | orchestrator | 2026-02-02 03:24:11.250909 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:24:11.250914 | orchestrator | Monday 02 February 2026 03:24:04 +0000 (0:00:00.211) 0:00:09.796 ******* 2026-02-02 03:24:11.250920 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:24:11.250925 | orchestrator | 2026-02-02 03:24:11.250943 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:24:11.250949 | orchestrator | Monday 02 February 2026 03:24:04 +0000 (0:00:00.227) 0:00:10.024 ******* 2026-02-02 03:24:11.250955 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:24:11.250960 | orchestrator | 2026-02-02 03:24:11.250966 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:24:11.250971 | orchestrator | Monday 02 February 2026 03:24:04 +0000 (0:00:00.212) 0:00:10.237 ******* 2026-02-02 03:24:11.250977 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:24:11.250982 | orchestrator | 2026-02-02 03:24:11.251023 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-02 03:24:11.251029 | orchestrator | Monday 02 February 2026 03:24:05 +0000 (0:00:00.212) 0:00:10.450 ******* 2026-02-02 03:24:11.251048 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:24:11.251054 | orchestrator | 2026-02-02 03:24:11.251060 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-02 03:24:11.251065 | orchestrator | Monday 02 February 2026 03:24:05 +0000 (0:00:00.167) 
0:00:10.617 *******
2026-02-02 03:24:11.251072 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2b8f5a57-fc4d-5c4a-8869-764dca19b379'}})
2026-02-02 03:24:11.251095 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'af42a967-eb71-546a-abb0-a5185990ed2a'}})
2026-02-02 03:24:11.251101 | orchestrator |
2026-02-02 03:24:11.251107 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-02 03:24:11.251112 | orchestrator | Monday 02 February 2026 03:24:05 +0000 (0:00:00.200) 0:00:10.818 *******
2026-02-02 03:24:11.251119 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'})
2026-02-02 03:24:11.251126 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'})
2026-02-02 03:24:11.251131 | orchestrator |
2026-02-02 03:24:11.251144 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-02 03:24:11.251157 | orchestrator | Monday 02 February 2026 03:24:07 +0000 (0:00:01.956) 0:00:12.775 *******
2026-02-02 03:24:11.251163 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'})
2026-02-02 03:24:11.251170 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'})
2026-02-02 03:24:11.251175 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:11.251181 | orchestrator |
2026-02-02 03:24:11.251186 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-02 03:24:11.251192 | orchestrator | Monday 02 February 2026 03:24:07 +0000 (0:00:00.151) 0:00:12.926 *******
2026-02-02 03:24:11.251197 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'})
2026-02-02 03:24:11.251203 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'})
2026-02-02 03:24:11.251209 | orchestrator |
2026-02-02 03:24:11.251215 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-02 03:24:11.251221 | orchestrator | Monday 02 February 2026 03:24:09 +0000 (0:00:01.511) 0:00:14.438 *******
2026-02-02 03:24:11.251228 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'})
2026-02-02 03:24:11.251234 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'})
2026-02-02 03:24:11.251240 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:11.251247 | orchestrator |
2026-02-02 03:24:11.251253 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-02 03:24:11.251260 | orchestrator | Monday 02 February 2026 03:24:09 +0000 (0:00:00.172) 0:00:14.611 *******
2026-02-02 03:24:11.251289 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:11.251303 | orchestrator |
2026-02-02 03:24:11.251312 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-02 03:24:11.251321 | orchestrator | Monday 02 February 2026 03:24:09 +0000 (0:00:00.396) 0:00:15.007 *******
2026-02-02 03:24:11.251330 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'})
2026-02-02 03:24:11.251339 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'})
2026-02-02 03:24:11.251348 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:11.251356 | orchestrator |
2026-02-02 03:24:11.251365 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-02 03:24:11.251373 | orchestrator | Monday 02 February 2026 03:24:09 +0000 (0:00:00.176) 0:00:15.184 *******
2026-02-02 03:24:11.251389 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:11.251398 | orchestrator |
2026-02-02 03:24:11.251406 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-02 03:24:11.251415 | orchestrator | Monday 02 February 2026 03:24:10 +0000 (0:00:00.149) 0:00:15.333 *******
2026-02-02 03:24:11.251430 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'})
2026-02-02 03:24:11.251440 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'})
2026-02-02 03:24:11.251449 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:11.251459 | orchestrator |
2026-02-02 03:24:11.251469 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-02 03:24:11.251478 | orchestrator | Monday 02 February 2026 03:24:10 +0000 (0:00:00.162) 0:00:15.496 *******
2026-02-02 03:24:11.251489 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:11.251495 | orchestrator |
2026-02-02 03:24:11.251501 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-02 03:24:11.251541 | orchestrator | Monday 02 February 2026 03:24:10 +0000 (0:00:00.135) 0:00:15.632 *******
2026-02-02 03:24:11.251548 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'})
2026-02-02 03:24:11.251554 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'})
2026-02-02 03:24:11.251560 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:11.251567 | orchestrator |
2026-02-02 03:24:11.251573 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-02 03:24:11.251579 | orchestrator | Monday 02 February 2026 03:24:10 +0000 (0:00:00.162) 0:00:15.794 *******
2026-02-02 03:24:11.251586 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:24:11.251593 | orchestrator |
2026-02-02 03:24:11.251600 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-02 03:24:11.251606 | orchestrator | Monday 02 February 2026 03:24:10 +0000 (0:00:00.132) 0:00:15.927 *******
2026-02-02 03:24:11.251613 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'})
2026-02-02 03:24:11.251618 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'})
2026-02-02 03:24:11.251624 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:11.251629 | orchestrator |
2026-02-02 03:24:11.251635 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-02 03:24:11.251640 | orchestrator | Monday 02 February 2026 03:24:10 +0000 (0:00:00.150) 0:00:16.078 *******
2026-02-02 03:24:11.251645 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'})
2026-02-02 03:24:11.251651 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'})
2026-02-02 03:24:11.251656 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:11.251662 | orchestrator |
2026-02-02 03:24:11.251667 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-02 03:24:11.251673 | orchestrator | Monday 02 February 2026 03:24:10 +0000 (0:00:00.161) 0:00:16.240 *******
2026-02-02 03:24:11.251678 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'})
2026-02-02 03:24:11.251684 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'})
2026-02-02 03:24:11.251695 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:11.251700 | orchestrator |
2026-02-02 03:24:11.251706 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-02 03:24:11.251711 | orchestrator | Monday 02 February 2026 03:24:11 +0000 (0:00:00.154) 0:00:16.394 *******
2026-02-02 03:24:11.251716 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:11.251723 | orchestrator |
2026-02-02 03:24:11.251732 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-02 03:24:11.251751 | orchestrator | Monday 02 February 2026 03:24:11 +0000 (0:00:00.129) 0:00:16.524 *******
2026-02-02 03:24:18.138356 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:18.138445 | orchestrator |
2026-02-02 03:24:18.138452 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-02 03:24:18.138458 | orchestrator | Monday 02 February 2026 03:24:11 +0000 (0:00:00.145) 0:00:16.669 *******
2026-02-02 03:24:18.138462 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:18.138467 | orchestrator |
2026-02-02 03:24:18.138471 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-02 03:24:18.138476 | orchestrator | Monday 02 February 2026 03:24:11 +0000 (0:00:00.391) 0:00:17.060 *******
2026-02-02 03:24:18.138480 | orchestrator | ok: [testbed-node-3] => {
2026-02-02 03:24:18.138484 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-02-02 03:24:18.138489 | orchestrator | }
2026-02-02 03:24:18.138493 | orchestrator |
2026-02-02 03:24:18.138497 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-02 03:24:18.138501 | orchestrator | Monday 02 February 2026 03:24:11 +0000 (0:00:00.144) 0:00:17.205 *******
2026-02-02 03:24:18.138505 | orchestrator | ok: [testbed-node-3] => {
2026-02-02 03:24:18.138509 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-02-02 03:24:18.138512 | orchestrator | }
2026-02-02 03:24:18.138516 | orchestrator |
2026-02-02 03:24:18.138520 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-02 03:24:18.138535 | orchestrator | Monday 02 February 2026 03:24:12 +0000 (0:00:00.138) 0:00:17.344 *******
2026-02-02 03:24:18.138539 | orchestrator | ok: [testbed-node-3] => {
2026-02-02 03:24:18.138543 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-02-02 03:24:18.138547 | orchestrator | }
2026-02-02 03:24:18.138551 | orchestrator |
2026-02-02 03:24:18.138555 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-02 03:24:18.138559 | orchestrator | Monday 02 February 2026 03:24:12 +0000 (0:00:00.155) 0:00:17.499 *******
2026-02-02 03:24:18.138562 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:24:18.138567 | orchestrator |
2026-02-02 03:24:18.138570 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-02 03:24:18.138574 | orchestrator | Monday 02 February 2026 03:24:12 +0000 (0:00:00.669) 0:00:18.169 *******
2026-02-02 03:24:18.138578 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:24:18.138582 | orchestrator |
2026-02-02 03:24:18.138586 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-02 03:24:18.138589 | orchestrator | Monday 02 February 2026 03:24:13 +0000 (0:00:00.529) 0:00:18.698 *******
2026-02-02 03:24:18.138593 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:24:18.138597 | orchestrator |
2026-02-02 03:24:18.138601 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-02 03:24:18.138605 | orchestrator | Monday 02 February 2026 03:24:13 +0000 (0:00:00.513) 0:00:19.211 *******
2026-02-02 03:24:18.138609 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:24:18.138613 | orchestrator |
2026-02-02 03:24:18.138617 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-02 03:24:18.138621 | orchestrator | Monday 02 February 2026 03:24:14 +0000 (0:00:00.150) 0:00:19.362 *******
2026-02-02 03:24:18.138625 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:18.138629 | orchestrator |
2026-02-02 03:24:18.138633 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-02 03:24:18.138650 | orchestrator | Monday 02 February 2026 03:24:14 +0000 (0:00:00.117) 0:00:19.479 *******
2026-02-02 03:24:18.138654 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:18.138658 | orchestrator |
2026-02-02 03:24:18.138662 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-02 03:24:18.138666 | orchestrator | Monday 02 February 2026 03:24:14 +0000 (0:00:00.132) 0:00:19.612 *******
2026-02-02 03:24:18.138670 | orchestrator | ok: [testbed-node-3] => {
2026-02-02 03:24:18.138674 | orchestrator |     "vgs_report": {
2026-02-02 03:24:18.138678 | orchestrator |         "vg": []
2026-02-02 03:24:18.138682 | orchestrator |     }
2026-02-02 03:24:18.138686 | orchestrator | }
2026-02-02 03:24:18.138690 | orchestrator |
2026-02-02 03:24:18.138694 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-02 03:24:18.138698 | orchestrator | Monday 02 February 2026 03:24:14 +0000 (0:00:00.160) 0:00:19.772 *******
2026-02-02 03:24:18.138702 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:18.138705 | orchestrator |
2026-02-02 03:24:18.138709 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-02 03:24:18.138713 | orchestrator | Monday 02 February 2026 03:24:14 +0000 (0:00:00.151) 0:00:19.924 *******
2026-02-02 03:24:18.138717 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:18.138721 | orchestrator |
2026-02-02 03:24:18.138724 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-02 03:24:18.138728 | orchestrator | Monday 02 February 2026 03:24:15 +0000 (0:00:00.376) 0:00:20.300 *******
2026-02-02 03:24:18.138732 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:18.138736 | orchestrator |
2026-02-02 03:24:18.138740 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-02 03:24:18.138743 | orchestrator | Monday 02 February 2026 03:24:15 +0000 (0:00:00.149) 0:00:20.450 *******
2026-02-02 03:24:18.138747 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:18.138751 | orchestrator |
2026-02-02 03:24:18.138755 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-02 03:24:18.138759 | orchestrator | Monday 02 February 2026 03:24:15 +0000 (0:00:00.190) 0:00:20.640 *******
2026-02-02 03:24:18.138762 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:18.138766 | orchestrator |
2026-02-02 03:24:18.138770 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-02 03:24:18.138783 | orchestrator | Monday 02 February 2026 03:24:15 +0000 (0:00:00.138) 0:00:20.779 *******
2026-02-02 03:24:18.138787 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:18.138797 | orchestrator |
2026-02-02 03:24:18.138801 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-02 03:24:18.138804 | orchestrator | Monday 02 February 2026 03:24:15 +0000 (0:00:00.153) 0:00:20.933 *******
2026-02-02 03:24:18.138828 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:18.138832 | orchestrator |
2026-02-02 03:24:18.138836 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-02 03:24:18.138840 | orchestrator | Monday 02 February 2026 03:24:15 +0000 (0:00:00.139) 0:00:21.072 *******
2026-02-02 03:24:18.138853 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:18.138857 | orchestrator |
2026-02-02 03:24:18.138861 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-02 03:24:18.138865 | orchestrator | Monday 02 February 2026 03:24:15 +0000 (0:00:00.151) 0:00:21.223 *******
2026-02-02 03:24:18.138869 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:18.138873 | orchestrator |
2026-02-02 03:24:18.138877 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-02 03:24:18.138880 | orchestrator | Monday 02 February 2026 03:24:16 +0000 (0:00:00.145) 0:00:21.369 *******
2026-02-02 03:24:18.138884 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:18.138889 | orchestrator |
2026-02-02 03:24:18.138893 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-02-02 03:24:18.138898 | orchestrator | Monday 02 February 2026 03:24:16 +0000 (0:00:00.144) 0:00:21.513 *******
2026-02-02 03:24:18.138906 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:18.138911 | orchestrator |
2026-02-02 03:24:18.138915 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-02-02 03:24:18.138920 | orchestrator | Monday 02 February 2026 03:24:16 +0000 (0:00:00.156) 0:00:21.670 *******
2026-02-02 03:24:18.138924 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:18.138928 | orchestrator |
2026-02-02 03:24:18.138938 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-02-02 03:24:18.138944 | orchestrator | Monday 02 February 2026 03:24:16 +0000 (0:00:00.155) 0:00:21.829 *******
2026-02-02 03:24:18.138950 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:18.138956 | orchestrator |
2026-02-02 03:24:18.138962 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-02-02 03:24:18.138968 | orchestrator | Monday 02 February 2026 03:24:16 +0000 (0:00:00.155) 0:00:21.984 *******
2026-02-02 03:24:18.138974 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:18.139001 | orchestrator |
2026-02-02 03:24:18.139007 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-02-02 03:24:18.139015 | orchestrator | Monday 02 February 2026 03:24:17 +0000 (0:00:00.413) 0:00:22.398 *******
2026-02-02 03:24:18.139020 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'})
2026-02-02 03:24:18.139025 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'})
2026-02-02 03:24:18.139029 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:18.139033 | orchestrator |
2026-02-02 03:24:18.139037 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-02-02 03:24:18.139041 | orchestrator | Monday 02 February 2026 03:24:17 +0000 (0:00:00.164) 0:00:22.563 *******
2026-02-02 03:24:18.139045 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'})
2026-02-02 03:24:18.139049 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'})
2026-02-02 03:24:18.139053 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:18.139056 | orchestrator |
2026-02-02 03:24:18.139060 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-02 03:24:18.139064 | orchestrator | Monday 02 February 2026 03:24:17 +0000 (0:00:00.165) 0:00:22.728 *******
2026-02-02 03:24:18.139069 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'})
2026-02-02 03:24:18.139075 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'})
2026-02-02 03:24:18.139081 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:18.139088 | orchestrator |
2026-02-02 03:24:18.139093 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-02 03:24:18.139099 | orchestrator | Monday 02 February 2026 03:24:17 +0000 (0:00:00.171) 0:00:22.899 *******
2026-02-02 03:24:18.139105 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'})
2026-02-02 03:24:18.139111 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'})
2026-02-02 03:24:18.139117 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:18.139122 | orchestrator |
2026-02-02 03:24:18.139128 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-02 03:24:18.139134 | orchestrator | Monday 02 February 2026 03:24:17 +0000 (0:00:00.174) 0:00:23.073 *******
2026-02-02 03:24:18.139145 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'})
2026-02-02 03:24:18.139151 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'})
2026-02-02 03:24:18.139156 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:18.139162 | orchestrator |
2026-02-02 03:24:18.139168 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-02 03:24:18.139174 | orchestrator | Monday 02 February 2026 03:24:17 +0000 (0:00:00.181) 0:00:23.255 *******
2026-02-02 03:24:18.139185 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'})
2026-02-02 03:24:23.878542 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'})
2026-02-02 03:24:23.878684 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:23.878711 | orchestrator |
2026-02-02 03:24:23.878731 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-02 03:24:23.878753 | orchestrator | Monday 02 February 2026 03:24:18 +0000 (0:00:00.154) 0:00:23.410 *******
2026-02-02 03:24:23.878772 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'})
2026-02-02 03:24:23.878791 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'})
2026-02-02 03:24:23.878811 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:23.878828 | orchestrator |
2026-02-02 03:24:23.878866 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-02 03:24:23.878885 | orchestrator | Monday 02 February 2026 03:24:18 +0000 (0:00:00.180) 0:00:23.590 *******
2026-02-02 03:24:23.878904 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'})
2026-02-02 03:24:23.878924 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'})
2026-02-02 03:24:23.878944 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:23.878963 | orchestrator |
2026-02-02 03:24:23.879013 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-02 03:24:23.879034 | orchestrator | Monday 02 February 2026 03:24:18 +0000 (0:00:00.166) 0:00:23.757 *******
2026-02-02 03:24:23.879057 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:24:23.879112 | orchestrator |
2026-02-02 03:24:23.879133 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-02 03:24:23.879153 | orchestrator | Monday 02 February 2026 03:24:19 +0000 (0:00:00.532) 0:00:24.290 *******
2026-02-02 03:24:23.879175 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:24:23.879196 | orchestrator |
2026-02-02 03:24:23.879216 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-02 03:24:23.879237 | orchestrator | Monday 02 February 2026 03:24:19 +0000 (0:00:00.528) 0:00:24.818 *******
2026-02-02 03:24:23.879258 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:24:23.879278 | orchestrator |
2026-02-02 03:24:23.879299 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-02 03:24:23.879322 | orchestrator | Monday 02 February 2026 03:24:19 +0000 (0:00:00.160) 0:00:24.979 *******
2026-02-02 03:24:23.879344 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'vg_name': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'})
2026-02-02 03:24:23.879367 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'vg_name': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'})
2026-02-02 03:24:23.879427 | orchestrator |
2026-02-02 03:24:23.879450 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-02 03:24:23.879487 | orchestrator | Monday 02 February 2026 03:24:19 +0000 (0:00:00.173) 0:00:25.152 *******
2026-02-02 03:24:23.879508 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'})
2026-02-02 03:24:23.879530 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'})
2026-02-02 03:24:23.879552 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:23.879572 | orchestrator |
2026-02-02 03:24:23.879591 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-02-02 03:24:23.879610 | orchestrator | Monday 02 February 2026 03:24:20 +0000 (0:00:00.423) 0:00:25.575 *******
2026-02-02 03:24:23.879630 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'})
2026-02-02 03:24:23.879649 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'})
2026-02-02 03:24:23.879668 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:23.879688 | orchestrator |
2026-02-02 03:24:23.879706 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-02-02 03:24:23.879726 | orchestrator | Monday 02 February 2026 03:24:20 +0000 (0:00:00.158) 0:00:25.734 *******
2026-02-02 03:24:23.879745 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'})
2026-02-02 03:24:23.879764 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'})
2026-02-02 03:24:23.879782 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:24:23.879801 | orchestrator |
2026-02-02 03:24:23.879820 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-02-02 03:24:23.879839 | orchestrator | Monday 02 February 2026 03:24:20 +0000 (0:00:00.163) 0:00:25.898 *******
2026-02-02 03:24:23.879886 | orchestrator | ok: [testbed-node-3] => {
2026-02-02 03:24:23.879906 | orchestrator |     "lvm_report": {
2026-02-02 03:24:23.879926 | orchestrator |         "lv": [
2026-02-02 03:24:23.879945 | orchestrator |             {
2026-02-02 03:24:23.879963 | orchestrator |                 "lv_name": "osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379",
2026-02-02 03:24:23.880116 | orchestrator |                 "vg_name": "ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379"
2026-02-02 03:24:23.880138 | orchestrator |             },
2026-02-02 03:24:23.880157 | orchestrator |             {
2026-02-02 03:24:23.880175 | orchestrator |                 "lv_name": "osd-block-af42a967-eb71-546a-abb0-a5185990ed2a",
2026-02-02 03:24:23.880192 | orchestrator |                 "vg_name": "ceph-af42a967-eb71-546a-abb0-a5185990ed2a"
2026-02-02 03:24:23.880210 | orchestrator |             }
2026-02-02 03:24:23.880228 | orchestrator |         ],
2026-02-02 03:24:23.880247 | orchestrator |         "pv": [
2026-02-02 03:24:23.880264 | orchestrator |             {
2026-02-02 03:24:23.880282 | orchestrator |                 "pv_name": "/dev/sdb",
2026-02-02 03:24:23.880302 | orchestrator |                 "vg_name": "ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379"
2026-02-02 03:24:23.880319 | orchestrator |             },
2026-02-02 03:24:23.880335 | orchestrator |             {
2026-02-02 03:24:23.880366 | orchestrator |                 "pv_name": "/dev/sdc",
2026-02-02 03:24:23.880383 | orchestrator |                 "vg_name": "ceph-af42a967-eb71-546a-abb0-a5185990ed2a"
2026-02-02 03:24:23.880400 | orchestrator |             }
2026-02-02 03:24:23.880417 | orchestrator |         ]
2026-02-02 03:24:23.880434 | orchestrator |     }
2026-02-02 03:24:23.880451 | orchestrator | }
2026-02-02 03:24:23.880490 | orchestrator |
2026-02-02 03:24:23.880509 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-02 03:24:23.880525 | orchestrator |
2026-02-02 03:24:23.880543 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-02 03:24:23.880561 | orchestrator | Monday 02 February 2026 03:24:20 +0000 (0:00:00.309) 0:00:26.208 *******
2026-02-02 03:24:23.880578 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-02-02 03:24:23.880596 | orchestrator |
2026-02-02 03:24:23.880613 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-02 03:24:23.880631 | orchestrator | Monday 02 February 2026 03:24:21 +0000 (0:00:00.251) 0:00:26.467 *******
2026-02-02 03:24:23.880649 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:24:23.880666 | orchestrator |
2026-02-02 03:24:23.880684 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:24:23.880701 | orchestrator | Monday 02 February 2026 03:24:21 +0000 (0:00:00.251) 0:00:26.719 *******
2026-02-02 03:24:23.880719 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-02-02 03:24:23.880738 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-02-02 03:24:23.880756 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-02-02 03:24:23.880774 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-02-02 03:24:23.880791 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-02-02 03:24:23.880810 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-02-02 03:24:23.880827 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-02-02 03:24:23.880845 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-02-02 03:24:23.880863 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-02-02 03:24:23.880881 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-02-02 03:24:23.880899 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-02-02 03:24:23.880916 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-02-02 03:24:23.880933 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-02-02 03:24:23.880951 | orchestrator |
2026-02-02 03:24:23.880970 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:24:23.881019 | orchestrator | Monday 02 February 2026 03:24:21 +0000 (0:00:00.499) 0:00:27.218 *******
2026-02-02 03:24:23.881038 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:24:23.881055 | orchestrator |
2026-02-02 03:24:23.881073 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:24:23.881093 | orchestrator | Monday 02 February 2026 03:24:22 +0000 (0:00:00.224) 0:00:27.442 *******
2026-02-02 03:24:23.881111 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:24:23.881130 | orchestrator |
2026-02-02 03:24:23.881148 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:24:23.881166 | orchestrator | Monday 02 February 2026 03:24:22 +0000 (0:00:00.781) 0:00:28.224 *******
2026-02-02 03:24:23.881185 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:24:23.881203 | orchestrator |
2026-02-02 03:24:23.881222 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:24:23.881240 | orchestrator | Monday 02 February 2026 03:24:23 +0000 (0:00:00.234) 0:00:28.458 *******
2026-02-02 03:24:23.881257 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:24:23.881274 | orchestrator |
2026-02-02 03:24:23.881292 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:24:23.881309 | orchestrator | Monday 02 February 2026 03:24:23 +0000 (0:00:00.197) 0:00:28.655 *******
2026-02-02 03:24:23.881345 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:24:23.881365 | orchestrator |
2026-02-02 03:24:23.881383 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:24:23.881402 | orchestrator | Monday 02 February 2026 03:24:23 +0000 (0:00:00.227) 0:00:28.883 *******
2026-02-02 03:24:23.881420 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:24:23.881438 | orchestrator |
2026-02-02 03:24:23.881481 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:24:35.615189 | orchestrator | Monday 02 February 2026 03:24:23 +0000 (0:00:00.268) 0:00:29.151 *******
2026-02-02 03:24:35.615289 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:24:35.615301 | orchestrator |
2026-02-02 03:24:35.615310 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:24:35.615318 | orchestrator | Monday 02 February 2026 03:24:24 +0000 (0:00:00.229) 0:00:29.381 *******
2026-02-02 03:24:35.615326 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:24:35.615334 | orchestrator |
2026-02-02 03:24:35.615341 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:24:35.615349 | orchestrator | Monday 02 February 2026 03:24:24 +0000 (0:00:00.267) 0:00:29.648 *******
2026-02-02 03:24:35.615356 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111)
2026-02-02 03:24:35.615365 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111)
2026-02-02 03:24:35.615372 | orchestrator |
2026-02-02 03:24:35.615394 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:24:35.615402 | orchestrator | Monday 02 February 2026 03:24:24 +0000 (0:00:00.503) 0:00:30.151 *******
2026-02-02 03:24:35.615409 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9dac4244-a4bc-44f9-ad81-53a595dd15e5)
2026-02-02 03:24:35.615416 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9dac4244-a4bc-44f9-ad81-53a595dd15e5)
2026-02-02 03:24:35.615424 | orchestrator |
2026-02-02 03:24:35.615431 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:24:35.615438 | orchestrator | Monday 02 February 2026 03:24:25 +0000 (0:00:00.445) 0:00:30.597 *******
2026-02-02 03:24:35.615445 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2d3e981f-8554-4288-941a-275f46913f28)
2026-02-02 03:24:35.615453 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2d3e981f-8554-4288-941a-275f46913f28)
2026-02-02 03:24:35.615460 | orchestrator |
2026-02-02 03:24:35.615467 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:24:35.615475 | orchestrator | Monday 02 February 2026 03:24:26 +0000 (0:00:00.711) 0:00:31.309 *******
2026-02-02 03:24:35.615482 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_076229ff-17a9-47be-973d-14b64a36a012)
2026-02-02 03:24:35.615489 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_076229ff-17a9-47be-973d-14b64a36a012)
2026-02-02 03:24:35.615497 | orchestrator |
2026-02-02 03:24:35.615504 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:24:35.615513 | orchestrator | Monday 02 February 2026 03:24:27 +0000 (0:00:01.012) 0:00:32.321 *******
2026-02-02 03:24:35.615526 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-02 03:24:35.615538 | orchestrator |
2026-02-02 03:24:35.615549 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:24:35.615560 | orchestrator | Monday 02 February 2026 03:24:27 +0000 (0:00:00.368) 0:00:32.689 *******
2026-02-02 03:24:35.615571 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 =>
(item=loop0) 2026-02-02 03:24:35.615584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-02 03:24:35.615595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-02 03:24:35.615633 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-02 03:24:35.615645 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-02 03:24:35.615657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-02 03:24:35.615669 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-02 03:24:35.615680 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-02 03:24:35.615692 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-02 03:24:35.615705 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-02 03:24:35.615744 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-02-02 03:24:35.615757 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-02 03:24:35.615769 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-02 03:24:35.615778 | orchestrator | 2026-02-02 03:24:35.615786 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:24:35.615795 | orchestrator | Monday 02 February 2026 03:24:27 +0000 (0:00:00.479) 0:00:33.169 ******* 2026-02-02 03:24:35.615803 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:35.615811 | orchestrator | 2026-02-02 
03:24:35.615820 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:24:35.615828 | orchestrator | Monday 02 February 2026 03:24:28 +0000 (0:00:00.227) 0:00:33.396 ******* 2026-02-02 03:24:35.615836 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:35.615844 | orchestrator | 2026-02-02 03:24:35.615853 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:24:35.615862 | orchestrator | Monday 02 February 2026 03:24:28 +0000 (0:00:00.234) 0:00:33.631 ******* 2026-02-02 03:24:35.615870 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:35.615878 | orchestrator | 2026-02-02 03:24:35.615904 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:24:35.615921 | orchestrator | Monday 02 February 2026 03:24:28 +0000 (0:00:00.199) 0:00:33.831 ******* 2026-02-02 03:24:35.615940 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:35.615951 | orchestrator | 2026-02-02 03:24:35.616021 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:24:35.616037 | orchestrator | Monday 02 February 2026 03:24:28 +0000 (0:00:00.233) 0:00:34.064 ******* 2026-02-02 03:24:35.616075 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:35.616185 | orchestrator | 2026-02-02 03:24:35.616195 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:24:35.616204 | orchestrator | Monday 02 February 2026 03:24:29 +0000 (0:00:00.226) 0:00:34.290 ******* 2026-02-02 03:24:35.616211 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:35.616218 | orchestrator | 2026-02-02 03:24:35.616226 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:24:35.616233 | orchestrator | Monday 02 February 2026 03:24:29 +0000 (0:00:00.236) 
0:00:34.527 ******* 2026-02-02 03:24:35.616251 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:35.616259 | orchestrator | 2026-02-02 03:24:35.616270 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:24:35.616282 | orchestrator | Monday 02 February 2026 03:24:29 +0000 (0:00:00.221) 0:00:34.749 ******* 2026-02-02 03:24:35.616290 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:35.616297 | orchestrator | 2026-02-02 03:24:35.616322 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:24:35.616330 | orchestrator | Monday 02 February 2026 03:24:30 +0000 (0:00:00.746) 0:00:35.495 ******* 2026-02-02 03:24:35.616337 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-02 03:24:35.616435 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-02 03:24:35.616444 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-02 03:24:35.616451 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-02 03:24:35.616459 | orchestrator | 2026-02-02 03:24:35.616466 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:24:35.616473 | orchestrator | Monday 02 February 2026 03:24:30 +0000 (0:00:00.760) 0:00:36.256 ******* 2026-02-02 03:24:35.616480 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:35.616487 | orchestrator | 2026-02-02 03:24:35.616494 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:24:35.616502 | orchestrator | Monday 02 February 2026 03:24:31 +0000 (0:00:00.219) 0:00:36.475 ******* 2026-02-02 03:24:35.616509 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:35.616516 | orchestrator | 2026-02-02 03:24:35.616523 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:24:35.616530 | orchestrator | Monday 02 
February 2026 03:24:31 +0000 (0:00:00.216) 0:00:36.691 ******* 2026-02-02 03:24:35.616553 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:35.616561 | orchestrator | 2026-02-02 03:24:35.616570 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-02 03:24:35.616582 | orchestrator | Monday 02 February 2026 03:24:31 +0000 (0:00:00.220) 0:00:36.912 ******* 2026-02-02 03:24:35.616590 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:35.616597 | orchestrator | 2026-02-02 03:24:35.616604 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-02 03:24:35.616612 | orchestrator | Monday 02 February 2026 03:24:31 +0000 (0:00:00.244) 0:00:37.157 ******* 2026-02-02 03:24:35.616623 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:35.616633 | orchestrator | 2026-02-02 03:24:35.616640 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-02 03:24:35.616647 | orchestrator | Monday 02 February 2026 03:24:32 +0000 (0:00:00.179) 0:00:37.337 ******* 2026-02-02 03:24:35.616655 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'}}) 2026-02-02 03:24:35.616663 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '106e1245-4ea8-54a2-9b27-5c2b147fae19'}}) 2026-02-02 03:24:35.616670 | orchestrator | 2026-02-02 03:24:35.616677 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-02 03:24:35.616685 | orchestrator | Monday 02 February 2026 03:24:32 +0000 (0:00:00.205) 0:00:37.542 ******* 2026-02-02 03:24:35.616693 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'}) 2026-02-02 03:24:35.616702 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'}) 2026-02-02 03:24:35.616712 | orchestrator | 2026-02-02 03:24:35.616724 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-02 03:24:35.616742 | orchestrator | Monday 02 February 2026 03:24:34 +0000 (0:00:01.777) 0:00:39.320 ******* 2026-02-02 03:24:35.616755 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'})  2026-02-02 03:24:35.616770 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'})  2026-02-02 03:24:35.616782 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:35.616793 | orchestrator | 2026-02-02 03:24:35.616804 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-02 03:24:35.616913 | orchestrator | Monday 02 February 2026 03:24:34 +0000 (0:00:00.149) 0:00:39.469 ******* 2026-02-02 03:24:35.616952 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'}) 2026-02-02 03:24:35.617154 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'}) 2026-02-02 03:24:41.804624 | orchestrator | 2026-02-02 03:24:41.804715 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-02 03:24:41.804726 | orchestrator | Monday 02 February 2026 03:24:35 +0000 (0:00:01.415) 0:00:40.884 ******* 2026-02-02 03:24:41.804734 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 
'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'})  2026-02-02 03:24:41.804744 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'})  2026-02-02 03:24:41.804751 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:41.804759 | orchestrator | 2026-02-02 03:24:41.804782 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-02 03:24:41.804789 | orchestrator | Monday 02 February 2026 03:24:36 +0000 (0:00:00.438) 0:00:41.323 ******* 2026-02-02 03:24:41.804796 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:41.804803 | orchestrator | 2026-02-02 03:24:41.804810 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-02 03:24:41.804817 | orchestrator | Monday 02 February 2026 03:24:36 +0000 (0:00:00.144) 0:00:41.467 ******* 2026-02-02 03:24:41.804824 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'})  2026-02-02 03:24:41.804831 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'})  2026-02-02 03:24:41.804838 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:41.804844 | orchestrator | 2026-02-02 03:24:41.804851 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-02 03:24:41.804858 | orchestrator | Monday 02 February 2026 03:24:36 +0000 (0:00:00.174) 0:00:41.642 ******* 2026-02-02 03:24:41.804865 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:41.804872 | orchestrator | 2026-02-02 03:24:41.804879 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-02 03:24:41.804886 | orchestrator | Monday 
02 February 2026 03:24:36 +0000 (0:00:00.174) 0:00:41.816 ******* 2026-02-02 03:24:41.804892 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'})  2026-02-02 03:24:41.804898 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'})  2026-02-02 03:24:41.804903 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:41.804910 | orchestrator | 2026-02-02 03:24:41.804917 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-02 03:24:41.804924 | orchestrator | Monday 02 February 2026 03:24:36 +0000 (0:00:00.156) 0:00:41.973 ******* 2026-02-02 03:24:41.804930 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:41.804935 | orchestrator | 2026-02-02 03:24:41.804942 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-02 03:24:41.804949 | orchestrator | Monday 02 February 2026 03:24:36 +0000 (0:00:00.153) 0:00:42.127 ******* 2026-02-02 03:24:41.804956 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'})  2026-02-02 03:24:41.804998 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'})  2026-02-02 03:24:41.805006 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:41.805013 | orchestrator | 2026-02-02 03:24:41.805020 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-02 03:24:41.805048 | orchestrator | Monday 02 February 2026 03:24:37 +0000 (0:00:00.183) 0:00:42.310 ******* 2026-02-02 03:24:41.805056 | orchestrator | ok: [testbed-node-4] 
2026-02-02 03:24:41.805064 | orchestrator | 2026-02-02 03:24:41.805071 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-02 03:24:41.805078 | orchestrator | Monday 02 February 2026 03:24:37 +0000 (0:00:00.149) 0:00:42.460 ******* 2026-02-02 03:24:41.805085 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'})  2026-02-02 03:24:41.805091 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'})  2026-02-02 03:24:41.805098 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:41.805105 | orchestrator | 2026-02-02 03:24:41.805112 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-02-02 03:24:41.805119 | orchestrator | Monday 02 February 2026 03:24:37 +0000 (0:00:00.171) 0:00:42.631 ******* 2026-02-02 03:24:41.805125 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'})  2026-02-02 03:24:41.805132 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'})  2026-02-02 03:24:41.805139 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:41.805146 | orchestrator | 2026-02-02 03:24:41.805153 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-02 03:24:41.805173 | orchestrator | Monday 02 February 2026 03:24:37 +0000 (0:00:00.183) 0:00:42.815 ******* 2026-02-02 03:24:41.805180 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'})  2026-02-02 
03:24:41.805187 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'})  2026-02-02 03:24:41.805194 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:41.805200 | orchestrator | 2026-02-02 03:24:41.805207 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-02 03:24:41.805214 | orchestrator | Monday 02 February 2026 03:24:37 +0000 (0:00:00.170) 0:00:42.986 ******* 2026-02-02 03:24:41.805225 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:41.805233 | orchestrator | 2026-02-02 03:24:41.805239 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-02 03:24:41.805246 | orchestrator | Monday 02 February 2026 03:24:38 +0000 (0:00:00.360) 0:00:43.347 ******* 2026-02-02 03:24:41.805253 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:41.805260 | orchestrator | 2026-02-02 03:24:41.805266 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-02-02 03:24:41.805273 | orchestrator | Monday 02 February 2026 03:24:38 +0000 (0:00:00.148) 0:00:43.495 ******* 2026-02-02 03:24:41.805280 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:41.805287 | orchestrator | 2026-02-02 03:24:41.805294 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-02 03:24:41.805300 | orchestrator | Monday 02 February 2026 03:24:38 +0000 (0:00:00.137) 0:00:43.632 ******* 2026-02-02 03:24:41.805307 | orchestrator | ok: [testbed-node-4] => { 2026-02-02 03:24:41.805314 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-02 03:24:41.805321 | orchestrator | } 2026-02-02 03:24:41.805328 | orchestrator | 2026-02-02 03:24:41.805335 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-02 
03:24:41.805342 | orchestrator | Monday 02 February 2026 03:24:38 +0000 (0:00:00.149) 0:00:43.782 ******* 2026-02-02 03:24:41.805349 | orchestrator | ok: [testbed-node-4] => { 2026-02-02 03:24:41.805355 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-02 03:24:41.805369 | orchestrator | } 2026-02-02 03:24:41.805376 | orchestrator | 2026-02-02 03:24:41.805383 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-02 03:24:41.805389 | orchestrator | Monday 02 February 2026 03:24:38 +0000 (0:00:00.141) 0:00:43.924 ******* 2026-02-02 03:24:41.805396 | orchestrator | ok: [testbed-node-4] => { 2026-02-02 03:24:41.805403 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-02 03:24:41.805410 | orchestrator | } 2026-02-02 03:24:41.805416 | orchestrator | 2026-02-02 03:24:41.805423 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-02 03:24:41.805430 | orchestrator | Monday 02 February 2026 03:24:38 +0000 (0:00:00.169) 0:00:44.094 ******* 2026-02-02 03:24:41.805437 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:24:41.805444 | orchestrator | 2026-02-02 03:24:41.805451 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-02 03:24:41.805458 | orchestrator | Monday 02 February 2026 03:24:39 +0000 (0:00:00.547) 0:00:44.641 ******* 2026-02-02 03:24:41.805465 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:24:41.805472 | orchestrator | 2026-02-02 03:24:41.805478 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-02 03:24:41.805485 | orchestrator | Monday 02 February 2026 03:24:39 +0000 (0:00:00.508) 0:00:45.150 ******* 2026-02-02 03:24:41.805492 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:24:41.805499 | orchestrator | 2026-02-02 03:24:41.805505 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-02-02 03:24:41.805512 | orchestrator | Monday 02 February 2026 03:24:40 +0000 (0:00:00.551) 0:00:45.701 ******* 2026-02-02 03:24:41.805519 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:24:41.805526 | orchestrator | 2026-02-02 03:24:41.805533 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-02 03:24:41.805539 | orchestrator | Monday 02 February 2026 03:24:40 +0000 (0:00:00.163) 0:00:45.865 ******* 2026-02-02 03:24:41.805546 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:41.805553 | orchestrator | 2026-02-02 03:24:41.805560 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-02 03:24:41.805567 | orchestrator | Monday 02 February 2026 03:24:40 +0000 (0:00:00.108) 0:00:45.973 ******* 2026-02-02 03:24:41.805573 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:41.805580 | orchestrator | 2026-02-02 03:24:41.805587 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-02 03:24:41.805594 | orchestrator | Monday 02 February 2026 03:24:41 +0000 (0:00:00.346) 0:00:46.320 ******* 2026-02-02 03:24:41.805601 | orchestrator | ok: [testbed-node-4] => { 2026-02-02 03:24:41.805608 | orchestrator |  "vgs_report": { 2026-02-02 03:24:41.805615 | orchestrator |  "vg": [] 2026-02-02 03:24:41.805622 | orchestrator |  } 2026-02-02 03:24:41.805629 | orchestrator | } 2026-02-02 03:24:41.805636 | orchestrator | 2026-02-02 03:24:41.805643 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-02 03:24:41.805650 | orchestrator | Monday 02 February 2026 03:24:41 +0000 (0:00:00.161) 0:00:46.482 ******* 2026-02-02 03:24:41.805657 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:41.805663 | orchestrator | 2026-02-02 03:24:41.805670 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-02-02 03:24:41.805677 | orchestrator | Monday 02 February 2026 03:24:41 +0000 (0:00:00.160) 0:00:46.642 ******* 2026-02-02 03:24:41.805684 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:41.805690 | orchestrator | 2026-02-02 03:24:41.805697 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-02 03:24:41.805704 | orchestrator | Monday 02 February 2026 03:24:41 +0000 (0:00:00.156) 0:00:46.798 ******* 2026-02-02 03:24:41.805711 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:41.805718 | orchestrator | 2026-02-02 03:24:41.805724 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-02 03:24:41.805731 | orchestrator | Monday 02 February 2026 03:24:41 +0000 (0:00:00.131) 0:00:46.930 ******* 2026-02-02 03:24:41.805743 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:41.805750 | orchestrator | 2026-02-02 03:24:41.805760 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-02 03:24:46.466228 | orchestrator | Monday 02 February 2026 03:24:41 +0000 (0:00:00.149) 0:00:47.080 ******* 2026-02-02 03:24:46.466286 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:46.466292 | orchestrator | 2026-02-02 03:24:46.466297 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-02 03:24:46.466301 | orchestrator | Monday 02 February 2026 03:24:41 +0000 (0:00:00.146) 0:00:47.226 ******* 2026-02-02 03:24:46.466305 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:46.466309 | orchestrator | 2026-02-02 03:24:46.466313 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-02 03:24:46.466318 | orchestrator | Monday 02 February 2026 03:24:42 +0000 (0:00:00.142) 0:00:47.369 ******* 2026-02-02 03:24:46.466355 | orchestrator | skipping: [testbed-node-4] 
2026-02-02 03:24:46.466363 | orchestrator | 2026-02-02 03:24:46.466378 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-02 03:24:46.466385 | orchestrator | Monday 02 February 2026 03:24:42 +0000 (0:00:00.154) 0:00:47.523 ******* 2026-02-02 03:24:46.466391 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:46.466397 | orchestrator | 2026-02-02 03:24:46.466404 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-02 03:24:46.466411 | orchestrator | Monday 02 February 2026 03:24:42 +0000 (0:00:00.154) 0:00:47.678 ******* 2026-02-02 03:24:46.466417 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:46.466424 | orchestrator | 2026-02-02 03:24:46.466428 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-02 03:24:46.466432 | orchestrator | Monday 02 February 2026 03:24:42 +0000 (0:00:00.133) 0:00:47.811 ******* 2026-02-02 03:24:46.466435 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:46.466439 | orchestrator | 2026-02-02 03:24:46.466443 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-02 03:24:46.466447 | orchestrator | Monday 02 February 2026 03:24:42 +0000 (0:00:00.294) 0:00:48.106 ******* 2026-02-02 03:24:46.466451 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:46.466454 | orchestrator | 2026-02-02 03:24:46.466458 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-02 03:24:46.466462 | orchestrator | Monday 02 February 2026 03:24:42 +0000 (0:00:00.139) 0:00:48.245 ******* 2026-02-02 03:24:46.466466 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:46.466470 | orchestrator | 2026-02-02 03:24:46.466473 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-02 03:24:46.466477 | orchestrator | 
Monday 02 February 2026 03:24:43 +0000 (0:00:00.136) 0:00:48.381 ******* 2026-02-02 03:24:46.466481 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:46.466485 | orchestrator | 2026-02-02 03:24:46.466488 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-02 03:24:46.466492 | orchestrator | Monday 02 February 2026 03:24:43 +0000 (0:00:00.138) 0:00:48.519 ******* 2026-02-02 03:24:46.466496 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:46.466500 | orchestrator | 2026-02-02 03:24:46.466503 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-02 03:24:46.466507 | orchestrator | Monday 02 February 2026 03:24:43 +0000 (0:00:00.124) 0:00:48.644 ******* 2026-02-02 03:24:46.466511 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'})  2026-02-02 03:24:46.466516 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'})  2026-02-02 03:24:46.466520 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:24:46.466523 | orchestrator | 2026-02-02 03:24:46.466527 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-02 03:24:46.466541 | orchestrator | Monday 02 February 2026 03:24:43 +0000 (0:00:00.140) 0:00:48.785 ******* 2026-02-02 03:24:46.466545 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'})  2026-02-02 03:24:46.466549 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'})  2026-02-02 03:24:46.466552 | orchestrator | skipping: 
[testbed-node-4]
2026-02-02 03:24:46.466556 | orchestrator |
2026-02-02 03:24:46.466560 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-02 03:24:46.466564 | orchestrator | Monday 02 February 2026 03:24:43 +0000 (0:00:00.167) 0:00:48.952 *******
2026-02-02 03:24:46.466568 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'})
2026-02-02 03:24:46.466574 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'})
2026-02-02 03:24:46.466580 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:24:46.466587 | orchestrator |
2026-02-02 03:24:46.466594 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-02 03:24:46.466600 | orchestrator | Monday 02 February 2026 03:24:43 +0000 (0:00:00.137) 0:00:49.089 *******
2026-02-02 03:24:46.466607 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'})
2026-02-02 03:24:46.466613 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'})
2026-02-02 03:24:46.466619 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:24:46.466622 | orchestrator |
2026-02-02 03:24:46.466634 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-02 03:24:46.466638 | orchestrator | Monday 02 February 2026 03:24:43 +0000 (0:00:00.136) 0:00:49.226 *******
2026-02-02 03:24:46.466642 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'})
2026-02-02 03:24:46.466645 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'})
2026-02-02 03:24:46.466649 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:24:46.466653 | orchestrator |
2026-02-02 03:24:46.466659 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-02 03:24:46.466663 | orchestrator | Monday 02 February 2026 03:24:44 +0000 (0:00:00.161) 0:00:49.388 *******
2026-02-02 03:24:46.466667 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'})
2026-02-02 03:24:46.466671 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'})
2026-02-02 03:24:46.466675 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:24:46.466678 | orchestrator |
2026-02-02 03:24:46.466682 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-02 03:24:46.466686 | orchestrator | Monday 02 February 2026 03:24:44 +0000 (0:00:00.140) 0:00:49.528 *******
2026-02-02 03:24:46.466690 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'})
2026-02-02 03:24:46.466693 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'})
2026-02-02 03:24:46.466697 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:24:46.466704 | orchestrator |
2026-02-02 03:24:46.466708 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-02 03:24:46.466712 | orchestrator | Monday 02 February 2026 03:24:44 +0000 (0:00:00.323) 0:00:49.852 *******
2026-02-02 03:24:46.466716 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'})
2026-02-02 03:24:46.466720 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'})
2026-02-02 03:24:46.466723 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:24:46.466727 | orchestrator |
2026-02-02 03:24:46.466731 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-02 03:24:46.466735 | orchestrator | Monday 02 February 2026 03:24:44 +0000 (0:00:00.137) 0:00:49.989 *******
2026-02-02 03:24:46.466738 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:24:46.466742 | orchestrator |
2026-02-02 03:24:46.466746 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-02 03:24:46.466750 | orchestrator | Monday 02 February 2026 03:24:45 +0000 (0:00:00.540) 0:00:50.529 *******
2026-02-02 03:24:46.466753 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:24:46.466757 | orchestrator |
2026-02-02 03:24:46.466761 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-02 03:24:46.466765 | orchestrator | Monday 02 February 2026 03:24:45 +0000 (0:00:00.526) 0:00:51.055 *******
2026-02-02 03:24:46.466768 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:24:46.466772 | orchestrator |
2026-02-02 03:24:46.466776 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-02 03:24:46.466780 | orchestrator | Monday 02 February 2026 03:24:45 +0000 (0:00:00.192) 0:00:51.248 *******
2026-02-02 03:24:46.466784 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'vg_name': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'})
2026-02-02 03:24:46.466788 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'vg_name': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'})
2026-02-02 03:24:46.466792 | orchestrator |
2026-02-02 03:24:46.466796 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-02 03:24:46.466800 | orchestrator | Monday 02 February 2026 03:24:46 +0000 (0:00:00.160) 0:00:51.409 *******
2026-02-02 03:24:46.466803 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'})
2026-02-02 03:24:46.466807 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'})
2026-02-02 03:24:46.466811 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:24:46.466815 | orchestrator |
2026-02-02 03:24:46.466819 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-02-02 03:24:46.466823 | orchestrator | Monday 02 February 2026 03:24:46 +0000 (0:00:00.163) 0:00:51.573 *******
2026-02-02 03:24:46.466827 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'})
2026-02-02 03:24:46.466834 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'})
2026-02-02 03:24:52.931590 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:24:52.931726 | orchestrator |
2026-02-02 03:24:52.931740 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-02-02 03:24:52.931751 | orchestrator | Monday 02 February 2026 03:24:46 +0000 (0:00:00.169) 0:00:51.742 *******
2026-02-02 03:24:52.931760 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'})
2026-02-02 03:24:52.931805 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'})
2026-02-02 03:24:52.931814 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:24:52.931822 | orchestrator |
2026-02-02 03:24:52.931830 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-02-02 03:24:52.931838 | orchestrator | Monday 02 February 2026 03:24:46 +0000 (0:00:00.153) 0:00:51.896 *******
2026-02-02 03:24:52.931846 | orchestrator | ok: [testbed-node-4] => {
2026-02-02 03:24:52.931854 | orchestrator |     "lvm_report": {
2026-02-02 03:24:52.931863 | orchestrator |         "lv": [
2026-02-02 03:24:52.931872 | orchestrator |             {
2026-02-02 03:24:52.931880 | orchestrator |                 "lv_name": "osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19",
2026-02-02 03:24:52.931889 | orchestrator |                 "vg_name": "ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19"
2026-02-02 03:24:52.931897 | orchestrator |             },
2026-02-02 03:24:52.931904 | orchestrator |             {
2026-02-02 03:24:52.931912 | orchestrator |                 "lv_name": "osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89",
2026-02-02 03:24:52.931920 | orchestrator |                 "vg_name": "ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89"
2026-02-02 03:24:52.931928 | orchestrator |             }
2026-02-02 03:24:52.931936 | orchestrator |         ],
2026-02-02 03:24:52.931944 | orchestrator |         "pv": [
2026-02-02 03:24:52.931951 | orchestrator |             {
2026-02-02 03:24:52.932006 | orchestrator |                 "pv_name": "/dev/sdb",
2026-02-02 03:24:52.932015 | orchestrator |                 "vg_name": "ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89"
2026-02-02 03:24:52.932024 | orchestrator |             },
2026-02-02 03:24:52.932032 | orchestrator |             {
2026-02-02 03:24:52.932041 | orchestrator |                 "pv_name": "/dev/sdc",
2026-02-02 03:24:52.932050 | orchestrator |                 "vg_name": "ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19"
2026-02-02 03:24:52.932058 | orchestrator |             }
2026-02-02 03:24:52.932067 | orchestrator |         ]
2026-02-02 03:24:52.932075 | orchestrator |     }
2026-02-02 03:24:52.932084 | orchestrator | }
2026-02-02 03:24:52.932093 | orchestrator |
2026-02-02 03:24:52.932101 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-02 03:24:52.932109 | orchestrator |
2026-02-02 03:24:52.932118 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-02 03:24:52.932126 | orchestrator | Monday 02 February 2026 03:24:46 +0000 (0:00:00.314) 0:00:52.210 *******
2026-02-02 03:24:52.932135 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-02-02 03:24:52.932144 | orchestrator |
2026-02-02 03:24:52.932152 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-02 03:24:52.932161 | orchestrator | Monday 02 February 2026 03:24:47 +0000 (0:00:00.586) 0:00:52.797 *******
2026-02-02 03:24:52.932169 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:24:52.932177 | orchestrator |
2026-02-02 03:24:52.932185 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:24:52.932193 | orchestrator | Monday 02 February 2026 03:24:47 +0000 (0:00:00.247) 0:00:53.045 *******
2026-02-02 03:24:52.932201 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-02-02 03:24:52.932209 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-02-02 03:24:52.932218 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-02-02 03:24:52.932227 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-02-02 03:24:52.932235 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-02-02 03:24:52.932243 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-02-02 03:24:52.932252 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-02-02 03:24:52.932267 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-02-02 03:24:52.932275 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-02-02 03:24:52.932283 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-02-02 03:24:52.932292 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-02-02 03:24:52.932301 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-02-02 03:24:52.932309 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-02-02 03:24:52.932318 | orchestrator |
2026-02-02 03:24:52.932326 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:24:52.932334 | orchestrator | Monday 02 February 2026 03:24:48 +0000 (0:00:00.373) 0:00:53.418 *******
2026-02-02 03:24:52.932343 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:24:52.932351 | orchestrator |
2026-02-02 03:24:52.932360 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:24:52.932368 | orchestrator | Monday 02 February 2026 03:24:48 +0000 (0:00:00.179) 0:00:53.597 *******
2026-02-02 03:24:52.932377 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:24:52.932385 | orchestrator |
2026-02-02 03:24:52.932394 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:24:52.932417 | orchestrator | Monday 02 February 2026 03:24:48 +0000 (0:00:00.179) 0:00:53.777 *******
2026-02-02 03:24:52.932425 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:24:52.932434 | orchestrator |
2026-02-02 03:24:52.932443 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:24:52.932451 | orchestrator | Monday 02 February 2026 03:24:48 +0000 (0:00:00.241) 0:00:54.018 *******
2026-02-02 03:24:52.932460 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:24:52.932468 | orchestrator |
2026-02-02 03:24:52.932477 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:24:52.932486 | orchestrator | Monday 02 February 2026 03:24:48 +0000 (0:00:00.186) 0:00:54.205 *******
2026-02-02 03:24:52.932494 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:24:52.932502 | orchestrator |
2026-02-02 03:24:52.932511 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:24:52.932519 | orchestrator | Monday 02 February 2026 03:24:49 +0000 (0:00:00.183) 0:00:54.388 *******
2026-02-02 03:24:52.932527 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:24:52.932535 | orchestrator |
2026-02-02 03:24:52.932544 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:24:52.932552 | orchestrator | Monday 02 February 2026 03:24:49 +0000 (0:00:00.204) 0:00:54.592 *******
2026-02-02 03:24:52.932561 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:24:52.932569 | orchestrator |
2026-02-02 03:24:52.932577 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:24:52.932586 | orchestrator | Monday 02 February 2026 03:24:49 +0000 (0:00:00.211) 0:00:54.804 *******
2026-02-02 03:24:52.932594 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:24:52.932602 | orchestrator |
2026-02-02 03:24:52.932611 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:24:52.932619 | orchestrator | Monday 02 February 2026 03:24:50 +0000 (0:00:00.513) 0:00:55.318 *******
2026-02-02 03:24:52.932627 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73)
2026-02-02 03:24:52.932636 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73)
2026-02-02 03:24:52.932645 | orchestrator |
2026-02-02 03:24:52.932653 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:24:52.932660 | orchestrator | Monday 02 February 2026 03:24:50 +0000 (0:00:00.415) 0:00:55.734 *******
2026-02-02 03:24:52.932700 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_bc39994b-92aa-40f2-807e-6457f6f8ea40)
2026-02-02 03:24:52.932717 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_bc39994b-92aa-40f2-807e-6457f6f8ea40)
2026-02-02 03:24:52.932726 | orchestrator |
2026-02-02 03:24:52.932733 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:24:52.932742 | orchestrator | Monday 02 February 2026 03:24:50 +0000 (0:00:00.422) 0:00:56.157 *******
2026-02-02 03:24:52.932750 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_10248bd5-0286-487e-81b0-791c797cb21b)
2026-02-02 03:24:52.932758 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_10248bd5-0286-487e-81b0-791c797cb21b)
2026-02-02 03:24:52.932766 | orchestrator |
2026-02-02 03:24:52.932774 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:24:52.932782 | orchestrator | Monday 02 February 2026 03:24:51 +0000 (0:00:00.439) 0:00:56.597 *******
2026-02-02 03:24:52.932790 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e969e129-18ea-460f-85bc-8dfb49c82359)
2026-02-02 03:24:52.932799 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e969e129-18ea-460f-85bc-8dfb49c82359)
2026-02-02 03:24:52.932807 | orchestrator |
2026-02-02 03:24:52.932815 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-02 03:24:52.932823 | orchestrator | Monday 02 February 2026 03:24:52 +0000 (0:00:00.722) 0:00:57.319 *******
2026-02-02 03:24:52.932831 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-02 03:24:52.932839 | orchestrator |
2026-02-02 03:24:52.932846 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:24:52.932854 | orchestrator | Monday 02 February 2026 03:24:52 +0000 (0:00:00.439) 0:00:57.759 *******
2026-02-02 03:24:52.932862 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-02-02 03:24:52.932870 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-02-02 03:24:52.932878 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-02-02 03:24:52.932886 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-02-02 03:24:52.932894 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-02-02 03:24:52.932901 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-02-02 03:24:52.932909 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-02-02 03:24:52.932917 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-02-02 03:24:52.932925 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-02-02 03:24:52.932933 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-02-02 03:24:52.932941 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-02-02 03:24:52.932969 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-02-02 03:25:01.938603 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-02-02 03:25:01.938703 | orchestrator |
2026-02-02 03:25:01.938718 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:25:01.938728 | orchestrator | Monday 02 February 2026 03:24:52 +0000 (0:00:00.442) 0:00:58.202 *******
2026-02-02 03:25:01.938737 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:01.938746 | orchestrator |
2026-02-02 03:25:01.938756 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:25:01.938776 | orchestrator | Monday 02 February 2026 03:24:53 +0000 (0:00:00.190) 0:00:58.393 *******
2026-02-02 03:25:01.938785 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:01.938811 | orchestrator |
2026-02-02 03:25:01.938821 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:25:01.938839 | orchestrator | Monday 02 February 2026 03:24:53 +0000 (0:00:00.229) 0:00:58.622 *******
2026-02-02 03:25:01.938848 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:01.938857 | orchestrator |
2026-02-02 03:25:01.938865 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:25:01.938874 | orchestrator | Monday 02 February 2026 03:24:53 +0000 (0:00:00.227) 0:00:58.850 *******
2026-02-02 03:25:01.938883 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:01.938892 | orchestrator |
2026-02-02 03:25:01.938900 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:25:01.938909 | orchestrator | Monday 02 February 2026 03:24:53 +0000 (0:00:00.221) 0:00:59.071 *******
2026-02-02 03:25:01.938918 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:01.938926 | orchestrator |
2026-02-02 03:25:01.938935 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:25:01.938944 | orchestrator | Monday 02 February 2026 03:24:54 +0000 (0:00:00.724) 0:00:59.796 *******
2026-02-02 03:25:01.938995 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:01.939006 | orchestrator |
2026-02-02 03:25:01.939015 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:25:01.939023 | orchestrator | Monday 02 February 2026 03:24:54 +0000 (0:00:00.232) 0:01:00.029 *******
2026-02-02 03:25:01.939032 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:01.939041 | orchestrator |
2026-02-02 03:25:01.939050 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:25:01.939059 | orchestrator | Monday 02 February 2026 03:24:54 +0000 (0:00:00.222) 0:01:00.251 *******
2026-02-02 03:25:01.939068 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:01.939077 | orchestrator |
2026-02-02 03:25:01.939086 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:25:01.939095 | orchestrator | Monday 02 February 2026 03:24:55 +0000 (0:00:00.206) 0:01:00.457 *******
2026-02-02 03:25:01.939104 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-02-02 03:25:01.939114 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-02-02 03:25:01.939123 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-02-02 03:25:01.939131 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-02-02 03:25:01.939140 | orchestrator |
2026-02-02 03:25:01.939149 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:25:01.939158 | orchestrator | Monday 02 February 2026 03:24:55 +0000 (0:00:00.680) 0:01:01.138 *******
2026-02-02 03:25:01.939170 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:01.939180 | orchestrator |
2026-02-02 03:25:01.939192 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:25:01.939203 | orchestrator | Monday 02 February 2026 03:24:56 +0000 (0:00:00.222) 0:01:01.361 *******
2026-02-02 03:25:01.939214 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:01.939224 | orchestrator |
2026-02-02 03:25:01.939234 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:25:01.939245 | orchestrator | Monday 02 February 2026 03:24:56 +0000 (0:00:00.225) 0:01:01.586 *******
2026-02-02 03:25:01.939255 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:01.939266 | orchestrator |
2026-02-02 03:25:01.939276 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-02 03:25:01.939287 | orchestrator | Monday 02 February 2026 03:24:56 +0000 (0:00:00.258) 0:01:01.844 *******
2026-02-02 03:25:01.939297 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:01.939308 | orchestrator |
2026-02-02 03:25:01.939318 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-02 03:25:01.939328 | orchestrator | Monday 02 February 2026 03:24:56 +0000 (0:00:00.210) 0:01:02.054 *******
2026-02-02 03:25:01.939340 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:01.939350 | orchestrator |
2026-02-02 03:25:01.939368 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-02 03:25:01.939379 | orchestrator | Monday 02 February 2026 03:24:56 +0000 (0:00:00.137) 0:01:02.192 *******
2026-02-02 03:25:01.939392 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd54a22ee-8606-5662-853b-b39e232caa8f'}})
2026-02-02 03:25:01.939403 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e4fc6918-1796-5a48-9994-5f31e91196e6'}})
2026-02-02 03:25:01.939413 | orchestrator |
2026-02-02 03:25:01.939424 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-02 03:25:01.939434 | orchestrator | Monday 02 February 2026 03:24:57 +0000 (0:00:00.228) 0:01:02.420 *******
2026-02-02 03:25:01.939445 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'})
2026-02-02 03:25:01.939456 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'})
2026-02-02 03:25:01.939467 | orchestrator |
2026-02-02 03:25:01.939478 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-02 03:25:01.939504 | orchestrator | Monday 02 February 2026 03:24:58 +0000 (0:00:01.786) 0:01:04.207 *******
2026-02-02 03:25:01.939515 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'})
2026-02-02 03:25:01.939527 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'})
2026-02-02 03:25:01.939537 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:01.939548 | orchestrator |
2026-02-02 03:25:01.939563 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-02 03:25:01.939573 | orchestrator | Monday 02 February 2026 03:24:59 +0000 (0:00:00.418) 0:01:04.626 *******
2026-02-02 03:25:01.939582 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'})
2026-02-02 03:25:01.939591 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'})
2026-02-02 03:25:01.939600 | orchestrator |
2026-02-02 03:25:01.939609 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-02 03:25:01.939624 | orchestrator | Monday 02 February 2026 03:25:00 +0000 (0:00:01.287) 0:01:05.913 *******
2026-02-02 03:25:01.939639 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'})
2026-02-02 03:25:01.939654 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'})
2026-02-02 03:25:01.939669 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:01.939684 | orchestrator |
2026-02-02 03:25:01.939698 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-02 03:25:01.939712 | orchestrator | Monday 02 February 2026 03:25:00 +0000 (0:00:00.175) 0:01:06.088 *******
2026-02-02 03:25:01.939727 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:01.939741 | orchestrator |
2026-02-02 03:25:01.939755 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-02 03:25:01.939769 | orchestrator | Monday 02 February 2026 03:25:00 +0000 (0:00:00.150) 0:01:06.239 *******
2026-02-02 03:25:01.939782 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'})
2026-02-02 03:25:01.939796 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'})
2026-02-02 03:25:01.939819 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:01.939832 | orchestrator |
2026-02-02 03:25:01.939847 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-02 03:25:01.939863 | orchestrator | Monday 02 February 2026 03:25:01 +0000 (0:00:00.146) 0:01:06.385 *******
2026-02-02 03:25:01.939878 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:01.939893 | orchestrator |
2026-02-02 03:25:01.939908 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-02 03:25:01.939922 | orchestrator | Monday 02 February 2026 03:25:01 +0000 (0:00:00.121) 0:01:06.507 *******
2026-02-02 03:25:01.939936 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'})
2026-02-02 03:25:01.939971 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'})
2026-02-02 03:25:01.939987 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:01.940001 | orchestrator |
2026-02-02 03:25:01.940014 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-02 03:25:01.940027 | orchestrator | Monday 02 February 2026 03:25:01 +0000 (0:00:00.169) 0:01:06.677 *******
2026-02-02 03:25:01.940041 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:01.940055 | orchestrator |
2026-02-02 03:25:01.940069 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-02 03:25:01.940084 | orchestrator | Monday 02 February 2026 03:25:01 +0000 (0:00:00.127) 0:01:06.805 *******
2026-02-02 03:25:01.940097 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'})
2026-02-02 03:25:01.940112 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'})
2026-02-02 03:25:01.940128 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:01.940141 | orchestrator |
2026-02-02 03:25:01.940154 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-02 03:25:01.940168 | orchestrator | Monday 02 February 2026 03:25:01 +0000 (0:00:00.155) 0:01:06.960 *******
2026-02-02 03:25:01.940182 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:25:01.940196 | orchestrator |
2026-02-02 03:25:01.940210 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-02 03:25:01.940224 | orchestrator | Monday 02 February 2026 03:25:01 +0000 (0:00:00.111) 0:01:07.072 *******
2026-02-02 03:25:01.940252 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'})
2026-02-02 03:25:08.516815 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'})
2026-02-02 03:25:08.516909 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:08.516922 | orchestrator |
2026-02-02 03:25:08.516932 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-02 03:25:08.516943 | orchestrator | Monday 02 February 2026 03:25:01 +0000 (0:00:00.141) 0:01:07.213 *******
2026-02-02 03:25:08.517025 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'})
2026-02-02 03:25:08.517034 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'})
2026-02-02 03:25:08.517046 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:08.517059 | orchestrator |
2026-02-02 03:25:08.517071 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-02 03:25:08.517082 | orchestrator | Monday 02 February 2026 03:25:02 +0000 (0:00:00.145) 0:01:07.359 *******
2026-02-02 03:25:08.517123 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'})
2026-02-02 03:25:08.517140 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'})
2026-02-02 03:25:08.517152 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:08.517164 | orchestrator |
2026-02-02 03:25:08.517177 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-02 03:25:08.517194 | orchestrator | Monday 02 February 2026 03:25:02 +0000 (0:00:00.307) 0:01:07.667 *******
2026-02-02 03:25:08.517207 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:08.517219 | orchestrator |
2026-02-02 03:25:08.517232 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-02 03:25:08.517246 | orchestrator | Monday 02 February 2026 03:25:02 +0000 (0:00:00.152) 0:01:07.819 *******
2026-02-02 03:25:08.517258 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:08.517272 | orchestrator |
2026-02-02 03:25:08.517286 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-02 03:25:08.517299 | orchestrator | Monday 02 February 2026 03:25:02 +0000 (0:00:00.118) 0:01:07.938 *******
2026-02-02 03:25:08.517312 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:08.517324 | orchestrator |
2026-02-02 03:25:08.517332 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-02 03:25:08.517339 | orchestrator | Monday 02 February 2026 03:25:02 +0000 (0:00:00.137) 0:01:08.075 *******
2026-02-02 03:25:08.517347 | orchestrator | ok: [testbed-node-5] => {
2026-02-02 03:25:08.517356 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-02-02 03:25:08.517366 | orchestrator | }
2026-02-02 03:25:08.517376 | orchestrator |
2026-02-02 03:25:08.517385 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-02 03:25:08.517394 | orchestrator | Monday 02 February 2026 03:25:02 +0000 (0:00:00.159) 0:01:08.235 *******
2026-02-02 03:25:08.517403 | orchestrator | ok: [testbed-node-5] => {
2026-02-02 03:25:08.517413 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-02-02 03:25:08.517422 | orchestrator | }
2026-02-02 03:25:08.517431 | orchestrator |
2026-02-02 03:25:08.517441 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-02 03:25:08.517450 | orchestrator | Monday 02 February 2026 03:25:03 +0000 (0:00:00.181) 0:01:08.416 *******
2026-02-02 03:25:08.517459 | orchestrator | ok: [testbed-node-5] => {
2026-02-02 03:25:08.517469 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-02-02 03:25:08.517478 | orchestrator | }
2026-02-02 03:25:08.517487 | orchestrator |
2026-02-02 03:25:08.517496 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-02 03:25:08.517506 | orchestrator | Monday 02 February 2026 03:25:03 +0000 (0:00:00.142) 0:01:08.559 *******
2026-02-02 03:25:08.517516 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:25:08.517525 | orchestrator |
2026-02-02 03:25:08.517534 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-02 03:25:08.517544 | orchestrator | Monday 02 February 2026 03:25:03 +0000 (0:00:00.584) 0:01:09.143 *******
2026-02-02 03:25:08.517553 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:25:08.517562 | orchestrator |
2026-02-02 03:25:08.517571 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-02 03:25:08.517581 | orchestrator | Monday 02 February 2026 03:25:04 +0000 (0:00:00.518) 0:01:09.661 *******
2026-02-02 03:25:08.517590 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:25:08.517599 | orchestrator |
2026-02-02 03:25:08.517608 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-02 03:25:08.517618 | orchestrator | Monday 02 February 2026 03:25:04 +0000 (0:00:00.517) 0:01:10.179 *******
2026-02-02 03:25:08.517627 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:25:08.517635 | orchestrator |
2026-02-02 03:25:08.517645 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-02 03:25:08.517663 | orchestrator | Monday 02 February 2026 03:25:05 +0000 (0:00:00.150) 0:01:10.329 *******
2026-02-02 03:25:08.517673 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:08.517682 | orchestrator |
2026-02-02 03:25:08.517691 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-02 03:25:08.517701 | orchestrator | Monday 02 February 2026 03:25:05 +0000 (0:00:00.107) 0:01:10.437 *******
2026-02-02 03:25:08.517710 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:08.517720 | orchestrator |
2026-02-02 03:25:08.517729 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-02 03:25:08.517737 | orchestrator | Monday 02 February 2026 03:25:05 +0000 (0:00:00.390) 0:01:10.827 *******
2026-02-02 03:25:08.517745 | orchestrator | ok: [testbed-node-5] => {
2026-02-02 03:25:08.517753 | orchestrator |     "vgs_report": {
2026-02-02 03:25:08.517762 | orchestrator |         "vg": []
2026-02-02 03:25:08.517788 | orchestrator |     }
2026-02-02 03:25:08.517797 | orchestrator | }
2026-02-02 03:25:08.517818 | orchestrator |
2026-02-02 03:25:08.517827 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-02 03:25:08.517835 | orchestrator | Monday 02 February 2026 03:25:05 +0000 (0:00:00.143) 0:01:10.971 *******
2026-02-02 03:25:08.517843 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:08.517851 | orchestrator |
2026-02-02 03:25:08.517859 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-02 03:25:08.517867 | orchestrator | Monday 02 February 2026 03:25:05 +0000 (0:00:00.144) 0:01:11.119 *******
2026-02-02 03:25:08.517881 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:08.517894 | orchestrator |
2026-02-02 03:25:08.517907 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-02 03:25:08.517919 | orchestrator | Monday 02 February 2026 03:25:05 +0000 (0:00:00.148) 0:01:11.263 *******
2026-02-02 03:25:08.517932 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:08.517968 | orchestrator |
2026-02-02 03:25:08.517983 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-02 03:25:08.517997 | orchestrator | Monday 02 February 2026 03:25:06 +0000 (0:00:00.151) 0:01:11.415 *******
2026-02-02 03:25:08.518010 | orchestrator |
skipping: [testbed-node-5] 2026-02-02 03:25:08.518091 | orchestrator | 2026-02-02 03:25:08.518106 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-02 03:25:08.518120 | orchestrator | Monday 02 February 2026 03:25:06 +0000 (0:00:00.157) 0:01:11.573 ******* 2026-02-02 03:25:08.518134 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:25:08.518149 | orchestrator | 2026-02-02 03:25:08.518164 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-02 03:25:08.518178 | orchestrator | Monday 02 February 2026 03:25:06 +0000 (0:00:00.146) 0:01:11.719 ******* 2026-02-02 03:25:08.518192 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:25:08.518207 | orchestrator | 2026-02-02 03:25:08.518221 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-02 03:25:08.518236 | orchestrator | Monday 02 February 2026 03:25:06 +0000 (0:00:00.141) 0:01:11.861 ******* 2026-02-02 03:25:08.518251 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:25:08.518266 | orchestrator | 2026-02-02 03:25:08.518281 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-02 03:25:08.518296 | orchestrator | Monday 02 February 2026 03:25:06 +0000 (0:00:00.139) 0:01:12.000 ******* 2026-02-02 03:25:08.518311 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:25:08.518337 | orchestrator | 2026-02-02 03:25:08.518353 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-02 03:25:08.518369 | orchestrator | Monday 02 February 2026 03:25:06 +0000 (0:00:00.158) 0:01:12.159 ******* 2026-02-02 03:25:08.518383 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:25:08.518399 | orchestrator | 2026-02-02 03:25:08.518419 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-02 
03:25:08.518434 | orchestrator | Monday 02 February 2026 03:25:07 +0000 (0:00:00.157) 0:01:12.316 ******* 2026-02-02 03:25:08.518462 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:25:08.518477 | orchestrator | 2026-02-02 03:25:08.518493 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-02 03:25:08.518507 | orchestrator | Monday 02 February 2026 03:25:07 +0000 (0:00:00.129) 0:01:12.446 ******* 2026-02-02 03:25:08.518522 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:25:08.518537 | orchestrator | 2026-02-02 03:25:08.518552 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-02 03:25:08.518567 | orchestrator | Monday 02 February 2026 03:25:07 +0000 (0:00:00.394) 0:01:12.841 ******* 2026-02-02 03:25:08.518582 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:25:08.518595 | orchestrator | 2026-02-02 03:25:08.518605 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-02 03:25:08.518614 | orchestrator | Monday 02 February 2026 03:25:07 +0000 (0:00:00.155) 0:01:12.996 ******* 2026-02-02 03:25:08.518622 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:25:08.518631 | orchestrator | 2026-02-02 03:25:08.518639 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-02 03:25:08.518648 | orchestrator | Monday 02 February 2026 03:25:07 +0000 (0:00:00.139) 0:01:13.136 ******* 2026-02-02 03:25:08.518657 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:25:08.518665 | orchestrator | 2026-02-02 03:25:08.518674 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-02 03:25:08.518683 | orchestrator | Monday 02 February 2026 03:25:07 +0000 (0:00:00.132) 0:01:13.268 ******* 2026-02-02 03:25:08.518692 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'})  2026-02-02 03:25:08.518702 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'})  2026-02-02 03:25:08.518710 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:25:08.518719 | orchestrator | 2026-02-02 03:25:08.518728 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-02 03:25:08.518736 | orchestrator | Monday 02 February 2026 03:25:08 +0000 (0:00:00.172) 0:01:13.441 ******* 2026-02-02 03:25:08.518745 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'})  2026-02-02 03:25:08.518754 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'})  2026-02-02 03:25:08.518763 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:25:08.518771 | orchestrator | 2026-02-02 03:25:08.518780 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-02 03:25:08.518789 | orchestrator | Monday 02 February 2026 03:25:08 +0000 (0:00:00.188) 0:01:13.629 ******* 2026-02-02 03:25:08.518811 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'})  2026-02-02 03:25:11.616826 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'})  2026-02-02 03:25:11.616935 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:25:11.617012 | orchestrator | 2026-02-02 03:25:11.617050 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-02-02 03:25:11.617069 | orchestrator | Monday 02 February 2026 03:25:08 +0000 (0:00:00.164) 0:01:13.793 ******* 2026-02-02 03:25:11.617085 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'})  2026-02-02 03:25:11.617101 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'})  2026-02-02 03:25:11.617139 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:25:11.617156 | orchestrator | 2026-02-02 03:25:11.617173 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-02 03:25:11.617190 | orchestrator | Monday 02 February 2026 03:25:08 +0000 (0:00:00.153) 0:01:13.947 ******* 2026-02-02 03:25:11.617207 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'})  2026-02-02 03:25:11.617225 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'})  2026-02-02 03:25:11.617243 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:25:11.617259 | orchestrator | 2026-02-02 03:25:11.617299 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-02 03:25:11.617315 | orchestrator | Monday 02 February 2026 03:25:08 +0000 (0:00:00.171) 0:01:14.118 ******* 2026-02-02 03:25:11.617331 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'})  2026-02-02 03:25:11.617348 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'})  2026-02-02 03:25:11.617366 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:25:11.617382 | orchestrator | 2026-02-02 03:25:11.617399 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-02-02 03:25:11.617416 | orchestrator | Monday 02 February 2026 03:25:09 +0000 (0:00:00.173) 0:01:14.291 ******* 2026-02-02 03:25:11.617432 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'})  2026-02-02 03:25:11.617450 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'})  2026-02-02 03:25:11.617467 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:25:11.617484 | orchestrator | 2026-02-02 03:25:11.617501 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-02 03:25:11.617520 | orchestrator | Monday 02 February 2026 03:25:09 +0000 (0:00:00.225) 0:01:14.516 ******* 2026-02-02 03:25:11.617537 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'})  2026-02-02 03:25:11.617553 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'})  2026-02-02 03:25:11.617569 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:25:11.617579 | orchestrator | 2026-02-02 03:25:11.617589 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-02 03:25:11.617598 | orchestrator | Monday 02 February 2026 03:25:09 +0000 (0:00:00.184) 0:01:14.701 ******* 2026-02-02 03:25:11.617608 | 
orchestrator | ok: [testbed-node-5] 2026-02-02 03:25:11.617619 | orchestrator | 2026-02-02 03:25:11.617628 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-02 03:25:11.617645 | orchestrator | Monday 02 February 2026 03:25:10 +0000 (0:00:00.756) 0:01:15.458 ******* 2026-02-02 03:25:11.617661 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:25:11.617677 | orchestrator | 2026-02-02 03:25:11.617693 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-02 03:25:11.617709 | orchestrator | Monday 02 February 2026 03:25:10 +0000 (0:00:00.500) 0:01:15.958 ******* 2026-02-02 03:25:11.617726 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:25:11.617742 | orchestrator | 2026-02-02 03:25:11.617759 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-02 03:25:11.617775 | orchestrator | Monday 02 February 2026 03:25:10 +0000 (0:00:00.137) 0:01:16.096 ******* 2026-02-02 03:25:11.617796 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'vg_name': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'}) 2026-02-02 03:25:11.617806 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'vg_name': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'}) 2026-02-02 03:25:11.617816 | orchestrator | 2026-02-02 03:25:11.617826 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-02 03:25:11.617837 | orchestrator | Monday 02 February 2026 03:25:10 +0000 (0:00:00.175) 0:01:16.272 ******* 2026-02-02 03:25:11.617877 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'})  2026-02-02 03:25:11.617906 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'})  2026-02-02 03:25:11.617923 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:25:11.617939 | orchestrator | 2026-02-02 03:25:11.617981 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-02-02 03:25:11.617999 | orchestrator | Monday 02 February 2026 03:25:11 +0000 (0:00:00.147) 0:01:16.420 ******* 2026-02-02 03:25:11.618080 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'})  2026-02-02 03:25:11.618102 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'})  2026-02-02 03:25:11.618118 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:25:11.618137 | orchestrator | 2026-02-02 03:25:11.618153 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-02 03:25:11.618170 | orchestrator | Monday 02 February 2026 03:25:11 +0000 (0:00:00.137) 0:01:16.557 ******* 2026-02-02 03:25:11.618183 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'})  2026-02-02 03:25:11.618193 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'})  2026-02-02 03:25:11.618203 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:25:11.618212 | orchestrator | 2026-02-02 03:25:11.618222 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-02 03:25:11.618232 | orchestrator | Monday 02 February 2026 03:25:11 +0000 (0:00:00.172) 0:01:16.730 ******* 2026-02-02 03:25:11.618242 | 
orchestrator | ok: [testbed-node-5] => { 2026-02-02 03:25:11.618252 | orchestrator |  "lvm_report": { 2026-02-02 03:25:11.618263 | orchestrator |  "lv": [ 2026-02-02 03:25:11.618273 | orchestrator |  { 2026-02-02 03:25:11.618283 | orchestrator |  "lv_name": "osd-block-d54a22ee-8606-5662-853b-b39e232caa8f", 2026-02-02 03:25:11.618294 | orchestrator |  "vg_name": "ceph-d54a22ee-8606-5662-853b-b39e232caa8f" 2026-02-02 03:25:11.618303 | orchestrator |  }, 2026-02-02 03:25:11.618313 | orchestrator |  { 2026-02-02 03:25:11.618323 | orchestrator |  "lv_name": "osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6", 2026-02-02 03:25:11.618333 | orchestrator |  "vg_name": "ceph-e4fc6918-1796-5a48-9994-5f31e91196e6" 2026-02-02 03:25:11.618343 | orchestrator |  } 2026-02-02 03:25:11.618353 | orchestrator |  ], 2026-02-02 03:25:11.618363 | orchestrator |  "pv": [ 2026-02-02 03:25:11.618372 | orchestrator |  { 2026-02-02 03:25:11.618382 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-02 03:25:11.618392 | orchestrator |  "vg_name": "ceph-d54a22ee-8606-5662-853b-b39e232caa8f" 2026-02-02 03:25:11.618402 | orchestrator |  }, 2026-02-02 03:25:11.618412 | orchestrator |  { 2026-02-02 03:25:11.618421 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-02 03:25:11.618445 | orchestrator |  "vg_name": "ceph-e4fc6918-1796-5a48-9994-5f31e91196e6" 2026-02-02 03:25:11.618455 | orchestrator |  } 2026-02-02 03:25:11.618465 | orchestrator |  ] 2026-02-02 03:25:11.618475 | orchestrator |  } 2026-02-02 03:25:11.618485 | orchestrator | } 2026-02-02 03:25:11.618495 | orchestrator | 2026-02-02 03:25:11.618505 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 03:25:11.618515 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-02 03:25:11.618526 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-02 03:25:11.618536 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-02 03:25:11.618546 | orchestrator | 2026-02-02 03:25:11.618564 | orchestrator | 2026-02-02 03:25:11.618580 | orchestrator | 2026-02-02 03:25:11.618597 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 03:25:11.618613 | orchestrator | Monday 02 February 2026 03:25:11 +0000 (0:00:00.147) 0:01:16.878 ******* 2026-02-02 03:25:11.618630 | orchestrator | =============================================================================== 2026-02-02 03:25:11.618647 | orchestrator | Create block VGs -------------------------------------------------------- 5.52s 2026-02-02 03:25:11.618663 | orchestrator | Create block LVs -------------------------------------------------------- 4.21s 2026-02-02 03:25:11.618680 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.83s 2026-02-02 03:25:11.618697 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.80s 2026-02-02 03:25:11.618713 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.58s 2026-02-02 03:25:11.618730 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.56s 2026-02-02 03:25:11.618746 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.56s 2026-02-02 03:25:11.618764 | orchestrator | Add known links to the list of available block devices ------------------ 1.42s 2026-02-02 03:25:11.618794 | orchestrator | Add known partitions to the list of available block devices ------------- 1.38s 2026-02-02 03:25:11.884815 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.10s 2026-02-02 03:25:11.884891 | orchestrator | Add known links to the list of available block devices ------------------ 1.01s 2026-02-02 03:25:11.884906 | 
orchestrator | Add known links to the list of available block devices ------------------ 0.92s 2026-02-02 03:25:11.884931 | orchestrator | Calculate VG sizes (with buffer) ---------------------------------------- 0.87s 2026-02-02 03:25:11.885004 | orchestrator | Print 'Create block LVs' ------------------------------------------------ 0.79s 2026-02-02 03:25:11.885019 | orchestrator | Add known links to the list of available block devices ------------------ 0.78s 2026-02-02 03:25:11.885032 | orchestrator | Print LVM report data --------------------------------------------------- 0.77s 2026-02-02 03:25:11.885044 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s 2026-02-02 03:25:11.885053 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s 2026-02-02 03:25:11.885060 | orchestrator | Get initial list of available block devices ----------------------------- 0.74s 2026-02-02 03:25:11.885067 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.73s 2026-02-02 03:25:24.334485 | orchestrator | 2026-02-02 03:25:24 | INFO  | Task 7ebdacf6-421c-4ade-84ab-695d1186bcd1 (facts) was prepared for execution. 2026-02-02 03:25:24.334590 | orchestrator | 2026-02-02 03:25:24 | INFO  | It takes a moment until task 7ebdacf6-421c-4ade-84ab-695d1186bcd1 (facts) has been started and output is visible here. 
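The "Combine JSON from _db/wal/db_wal_vgs_cmd_output" step above merges the per-type VG reports (DB, WAL, DB+WAL) gathered from LVM into the single `vgs_report` that the playbook then prints. The playbook's actual implementation is not shown in this log; the sketch below only mirrors the observed behavior, assuming the gather tasks run something like `vgs --reportformat json` (LVM's JSON report format wraps results as `{"report": [{"vg": [...]}]}`). The function name and sample strings are hypothetical.

```python
import json

def combine_vgs_reports(*cmd_outputs: str) -> dict:
    """Merge several `vgs --reportformat json` outputs into one {'vg': [...]} dict.

    Each argument is the raw stdout of one vgs invocation. The LVM JSON report
    format nests results as {"report": [{"vg": [...]}]}; we flatten the per-type
    VG lists into a single combined report.
    """
    combined = {"vg": []}
    for raw in cmd_outputs:
        report = json.loads(raw)["report"][0]
        combined["vg"].extend(report.get("vg", []))
    return combined

# On testbed-node-5 no dedicated DB/WAL devices are configured, so all three
# gathered reports are empty and the combined result matches the empty
# vgs_report printed in the log above.
empty = '{"report": [{"vg": []}]}'
print(combine_vgs_reports(empty, empty, empty))
```

Keeping the combined structure identical to a single `vgs` report means the downstream size-calculation and "Fail if … > available" checks can treat one merged list of VGs uniformly, regardless of which device class each VG came from.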
2026-02-02 03:25:38.587434 | orchestrator |
2026-02-02 03:25:38.587546 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-02-02 03:25:38.587598 | orchestrator |
2026-02-02 03:25:38.587610 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-02 03:25:38.587618 | orchestrator | Monday 02 February 2026 03:25:28 +0000 (0:00:00.287) 0:00:00.287 *******
2026-02-02 03:25:38.587627 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:25:38.587636 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:25:38.587644 | orchestrator | ok: [testbed-manager]
2026-02-02 03:25:38.587652 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:25:38.587660 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:25:38.587668 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:25:38.587676 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:25:38.587684 | orchestrator |
2026-02-02 03:25:38.587692 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-02 03:25:38.587700 | orchestrator | Monday 02 February 2026 03:25:30 +0000 (0:00:01.244) 0:00:01.531 *******
2026-02-02 03:25:38.587708 | orchestrator | skipping: [testbed-manager]
2026-02-02 03:25:38.587717 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:25:38.587725 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:25:38.587732 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:25:38.587740 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:25:38.587748 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:25:38.587756 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:38.587763 | orchestrator |
2026-02-02 03:25:38.587771 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-02 03:25:38.587779 | orchestrator |
2026-02-02 03:25:38.587787 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-02 03:25:38.587795 | orchestrator | Monday 02 February 2026 03:25:31 +0000 (0:00:01.573) 0:00:03.105 *******
2026-02-02 03:25:38.587803 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:25:38.587811 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:25:38.587819 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:25:38.587826 | orchestrator | ok: [testbed-manager]
2026-02-02 03:25:38.587834 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:25:38.587842 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:25:38.587850 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:25:38.587858 | orchestrator |
2026-02-02 03:25:38.587866 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-02-02 03:25:38.587874 | orchestrator |
2026-02-02 03:25:38.587882 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-02-02 03:25:38.587890 | orchestrator | Monday 02 February 2026 03:25:37 +0000 (0:00:05.912) 0:00:09.018 *******
2026-02-02 03:25:38.587898 | orchestrator | skipping: [testbed-manager]
2026-02-02 03:25:38.587905 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:25:38.587913 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:25:38.587921 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:25:38.587961 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:25:38.587990 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:25:38.588002 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:25:38.588015 | orchestrator |
2026-02-02 03:25:38.588038 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 03:25:38.588053 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 03:25:38.588068 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 03:25:38.588081 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 03:25:38.588117 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 03:25:38.588131 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 03:25:38.588186 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 03:25:38.588201 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 03:25:38.588216 | orchestrator |
2026-02-02 03:25:38.588226 | orchestrator |
2026-02-02 03:25:38.588234 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 03:25:38.588257 | orchestrator | Monday 02 February 2026 03:25:38 +0000 (0:00:00.562) 0:00:09.580 *******
2026-02-02 03:25:38.588265 | orchestrator | ===============================================================================
2026-02-02 03:25:38.588273 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.91s
2026-02-02 03:25:38.588281 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.57s
2026-02-02 03:25:38.588289 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.24s
2026-02-02 03:25:38.588297 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s
2026-02-02 03:25:41.108803 | orchestrator | 2026-02-02 03:25:41 | INFO  | Task 90c8fc9e-bd74-4c42-b891-25aa6bb4e242 (ceph) was prepared for execution.
2026-02-02 03:25:41.108904 | orchestrator | 2026-02-02 03:25:41 | INFO  | It takes a moment until task 90c8fc9e-bd74-4c42-b891-25aa6bb4e242 (ceph) has been started and output is visible here.
2026-02-02 03:26:00.523035 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-02 03:26:00.523144 | orchestrator | 2.16.14
2026-02-02 03:26:00.523159 | orchestrator |
2026-02-02 03:26:00.523169 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-02-02 03:26:00.523176 | orchestrator |
2026-02-02 03:26:00.523181 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-02 03:26:00.523186 | orchestrator | Monday 02 February 2026 03:25:46 +0000 (0:00:00.837) 0:00:00.837 *******
2026-02-02 03:26:00.523193 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 03:26:00.523198 | orchestrator |
2026-02-02 03:26:00.523203 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-02 03:26:00.523208 | orchestrator | Monday 02 February 2026 03:25:47 +0000 (0:00:01.297) 0:00:02.135 *******
2026-02-02 03:26:00.523212 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:26:00.523217 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:26:00.523222 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:26:00.523227 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:26:00.523231 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:26:00.523236 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:26:00.523241 | orchestrator |
2026-02-02 03:26:00.523246 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-02 03:26:00.523251 | orchestrator | Monday 02 February 2026 03:25:49 +0000 (0:00:01.306) 0:00:03.441 *******
2026-02-02 03:26:00.523255 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:26:00.523260 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:26:00.523265 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:26:00.523269 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:26:00.523274 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:26:00.523278 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:26:00.523283 | orchestrator |
2026-02-02 03:26:00.523288 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-02 03:26:00.523292 | orchestrator | Monday 02 February 2026 03:25:50 +0000 (0:00:00.859) 0:00:04.301 *******
2026-02-02 03:26:00.523297 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:26:00.523302 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:26:00.523306 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:26:00.523311 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:26:00.523334 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:26:00.523339 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:26:00.523344 | orchestrator |
2026-02-02 03:26:00.523349 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-02 03:26:00.523353 | orchestrator | Monday 02 February 2026 03:25:51 +0000 (0:00:01.025) 0:00:05.327 *******
2026-02-02 03:26:00.523358 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:26:00.523362 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:26:00.523367 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:26:00.523371 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:26:00.523376 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:26:00.523381 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:26:00.523385 | orchestrator |
2026-02-02 03:26:00.523390 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-02 03:26:00.523394 | orchestrator | Monday 02 February 2026 03:25:52 +0000 (0:00:00.849) 0:00:06.177 *******
2026-02-02 03:26:00.523399 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:26:00.523404 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:26:00.523408 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:26:00.523413 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:26:00.523417 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:26:00.523422 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:26:00.523426 | orchestrator |
2026-02-02 03:26:00.523431 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-02 03:26:00.523435 | orchestrator | Monday 02 February 2026 03:25:52 +0000 (0:00:00.622) 0:00:06.799 *******
2026-02-02 03:26:00.523440 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:26:00.523444 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:26:00.523449 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:26:00.523453 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:26:00.523458 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:26:00.523462 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:26:00.523467 | orchestrator |
2026-02-02 03:26:00.523472 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-02 03:26:00.523476 | orchestrator | Monday 02 February 2026 03:25:53 +0000 (0:00:00.875) 0:00:07.675 *******
2026-02-02 03:26:00.523481 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:26:00.523487 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:26:00.523491 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:26:00.523496 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:26:00.523500 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:26:00.523505 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:26:00.523510 | orchestrator |
2026-02-02 03:26:00.523515 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-02 03:26:00.523520 | orchestrator | Monday 02 February 2026 03:25:54 +0000 (0:00:00.652) 0:00:08.328 *******
2026-02-02 03:26:00.523525 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:26:00.523530 | orchestrator |
ok: [testbed-node-4] 2026-02-02 03:26:00.523535 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:26:00.523540 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:26:00.523545 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:26:00.523560 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:26:00.523566 | orchestrator | 2026-02-02 03:26:00.523570 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-02 03:26:00.523576 | orchestrator | Monday 02 February 2026 03:25:55 +0000 (0:00:00.898) 0:00:09.226 ******* 2026-02-02 03:26:00.523583 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 03:26:00.523590 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 03:26:00.523597 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 03:26:00.523604 | orchestrator | 2026-02-02 03:26:00.523611 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-02 03:26:00.523617 | orchestrator | Monday 02 February 2026 03:25:55 +0000 (0:00:00.713) 0:00:09.939 ******* 2026-02-02 03:26:00.523627 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:26:00.523634 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:26:00.523641 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:26:00.523664 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:26:00.523672 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:26:00.523679 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:26:00.523685 | orchestrator | 2026-02-02 03:26:00.523692 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-02 03:26:00.523699 | orchestrator | Monday 02 February 2026 03:25:56 +0000 (0:00:00.721) 0:00:10.660 ******* 2026-02-02 03:26:00.523705 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => 
(item=testbed-node-0) 2026-02-02 03:26:00.523712 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 03:26:00.523718 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 03:26:00.523724 | orchestrator | 2026-02-02 03:26:00.523730 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-02 03:26:00.523737 | orchestrator | Monday 02 February 2026 03:25:59 +0000 (0:00:02.529) 0:00:13.190 ******* 2026-02-02 03:26:00.523744 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-02 03:26:00.523751 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-02 03:26:00.523757 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-02 03:26:00.523763 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:00.523769 | orchestrator | 2026-02-02 03:26:00.523775 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-02 03:26:00.523782 | orchestrator | Monday 02 February 2026 03:25:59 +0000 (0:00:00.432) 0:00:13.622 ******* 2026-02-02 03:26:00.523791 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-02 03:26:00.523800 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-02 03:26:00.523806 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-02 03:26:00.523813 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:00.523820 | orchestrator | 2026-02-02 03:26:00.523827 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-02 03:26:00.523833 | orchestrator | Monday 02 February 2026 03:26:00 +0000 (0:00:00.627) 0:00:14.250 ******* 2026-02-02 03:26:00.523888 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:00.523931 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:00.523939 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:00.523963 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:00.523970 | orchestrator | 2026-02-02 03:26:00.523983 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] 
*************************** 2026-02-02 03:26:00.523990 | orchestrator | Monday 02 February 2026 03:26:00 +0000 (0:00:00.194) 0:00:14.445 ******* 2026-02-02 03:26:00.524007 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-02 03:25:57.490141', 'end': '2026-02-02 03:25:57.542549', 'delta': '0:00:00.052408', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-02 03:26:10.781678 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-02 03:25:58.086566', 'end': '2026-02-02 03:25:58.136395', 'delta': '0:00:00.049829', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-02 03:26:10.781858 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-02 03:25:58.626543', 'end': '2026-02-02 03:25:58.673996', 'delta': 
'0:00:00.047453', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-02 03:26:10.781876 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:10.781888 | orchestrator | 2026-02-02 03:26:10.781973 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-02 03:26:10.781989 | orchestrator | Monday 02 February 2026 03:26:00 +0000 (0:00:00.212) 0:00:14.657 ******* 2026-02-02 03:26:10.781999 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:26:10.782010 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:26:10.782071 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:26:10.782082 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:26:10.782092 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:26:10.782101 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:26:10.782111 | orchestrator | 2026-02-02 03:26:10.782120 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-02 03:26:10.782130 | orchestrator | Monday 02 February 2026 03:26:01 +0000 (0:00:00.778) 0:00:15.435 ******* 2026-02-02 03:26:10.782139 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-02 03:26:10.782149 | orchestrator | 2026-02-02 03:26:10.782159 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-02 03:26:10.782168 | orchestrator | Monday 02 February 2026 03:26:01 +0000 (0:00:00.606) 0:00:16.041 ******* 2026-02-02 03:26:10.782233 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:10.782243 | 
orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:10.782253 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:10.782263 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:10.782273 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:10.782283 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:10.782292 | orchestrator | 2026-02-02 03:26:10.782302 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-02 03:26:10.782314 | orchestrator | Monday 02 February 2026 03:26:02 +0000 (0:00:00.870) 0:00:16.911 ******* 2026-02-02 03:26:10.782324 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:10.782333 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:10.782343 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:10.782353 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:10.782363 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:10.782373 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:10.782383 | orchestrator | 2026-02-02 03:26:10.782393 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 03:26:10.782405 | orchestrator | Monday 02 February 2026 03:26:04 +0000 (0:00:01.280) 0:00:18.192 ******* 2026-02-02 03:26:10.782414 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:10.782425 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:10.782435 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:10.782446 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:10.782456 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:10.782482 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:10.782491 | orchestrator | 2026-02-02 03:26:10.782500 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-02 03:26:10.782509 | orchestrator | Monday 02 February 2026 03:26:04 
+0000 (0:00:00.635) 0:00:18.827 ******* 2026-02-02 03:26:10.782517 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:10.782525 | orchestrator | 2026-02-02 03:26:10.782534 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-02 03:26:10.782542 | orchestrator | Monday 02 February 2026 03:26:04 +0000 (0:00:00.140) 0:00:18.967 ******* 2026-02-02 03:26:10.782550 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:10.782558 | orchestrator | 2026-02-02 03:26:10.782567 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 03:26:10.782575 | orchestrator | Monday 02 February 2026 03:26:05 +0000 (0:00:00.283) 0:00:19.251 ******* 2026-02-02 03:26:10.782583 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:10.782591 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:10.782599 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:10.782607 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:10.782615 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:10.782624 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:10.782632 | orchestrator | 2026-02-02 03:26:10.782662 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-02 03:26:10.782671 | orchestrator | Monday 02 February 2026 03:26:06 +0000 (0:00:00.903) 0:00:20.154 ******* 2026-02-02 03:26:10.782679 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:10.782687 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:10.782695 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:10.782702 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:10.782710 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:10.782718 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:10.782726 | orchestrator | 2026-02-02 03:26:10.782734 | orchestrator | TASK [ceph-facts : 
Set_fact build devices from resolved symlinks] ************** 2026-02-02 03:26:10.782742 | orchestrator | Monday 02 February 2026 03:26:06 +0000 (0:00:00.631) 0:00:20.786 ******* 2026-02-02 03:26:10.782750 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:10.782758 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:10.782766 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:10.782783 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:10.782791 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:10.782799 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:10.782807 | orchestrator | 2026-02-02 03:26:10.782815 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-02 03:26:10.782823 | orchestrator | Monday 02 February 2026 03:26:07 +0000 (0:00:00.913) 0:00:21.699 ******* 2026-02-02 03:26:10.782831 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:10.782839 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:10.782847 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:10.782855 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:10.782862 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:10.782870 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:10.782878 | orchestrator | 2026-02-02 03:26:10.782886 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-02 03:26:10.782894 | orchestrator | Monday 02 February 2026 03:26:08 +0000 (0:00:00.666) 0:00:22.366 ******* 2026-02-02 03:26:10.782926 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:10.782934 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:10.782942 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:10.782950 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:10.782958 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:10.782965 | orchestrator 
| skipping: [testbed-node-2] 2026-02-02 03:26:10.782973 | orchestrator | 2026-02-02 03:26:10.782981 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-02 03:26:10.782989 | orchestrator | Monday 02 February 2026 03:26:09 +0000 (0:00:00.922) 0:00:23.288 ******* 2026-02-02 03:26:10.782997 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:10.783004 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:10.783012 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:10.783020 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:10.783027 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:10.783036 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:10.783043 | orchestrator | 2026-02-02 03:26:10.783051 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-02 03:26:10.783061 | orchestrator | Monday 02 February 2026 03:26:09 +0000 (0:00:00.643) 0:00:23.932 ******* 2026-02-02 03:26:10.783068 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:10.783076 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:10.783084 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:10.783092 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:10.783099 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:10.783107 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:10.783115 | orchestrator | 2026-02-02 03:26:10.783123 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-02 03:26:10.783131 | orchestrator | Monday 02 February 2026 03:26:10 +0000 (0:00:00.864) 0:00:24.797 ******* 2026-02-02 03:26:10.783158 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--2b8f5a57--fc4d--5c4a--8869--764dca19b379-osd--block--2b8f5a57--fc4d--5c4a--8869--764dca19b379', 'dm-uuid-LVM-2Xx1rXy8ZvvzVeymXUM2Y23jmTeKUn30gyH8a84MHrJn7bcz7phSu8LEA3bm3DqO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:10.783176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--af42a967--eb71--546a--abb0--a5185990ed2a-osd--block--af42a967--eb71--546a--abb0--a5185990ed2a', 'dm-uuid-LVM-nQNI9mGSypmWJN7Kribh0RNL5qLQKFSceYxT4mfzBYfoYiha3ZzoEdYR0rTnnIvK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:10.783201 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:10.898845 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:10.898989 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:10.899003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:10.899012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:10.899019 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:10.899027 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:10.899033 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:10.899077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part1', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part14', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part15', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part16', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:26:10.899105 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89-osd--block--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89', 'dm-uuid-LVM-bGXwDmNnGJLl15xDO66UDgeGoDbpg8C0HvMSdsO6YcSLb4aDqGATNEcOudg8iQom'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:10.899114 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2b8f5a57--fc4d--5c4a--8869--764dca19b379-osd--block--2b8f5a57--fc4d--5c4a--8869--764dca19b379'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HOxmXw-N5cX-V1Nz-Lu3r-OQk9-N5gG-1syyTi', 'scsi-0QEMU_QEMU_HARDDISK_5578c4aa-4507-4a80-9665-78072b9f11f4', 'scsi-SQEMU_QEMU_HARDDISK_5578c4aa-4507-4a80-9665-78072b9f11f4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:26:10.899121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--106e1245--4ea8--54a2--9b27--5c2b147fae19-osd--block--106e1245--4ea8--54a2--9b27--5c2b147fae19', 'dm-uuid-LVM-7fojGdQjjxzlZ1d67G3lfXV0uQvvNrpG74l8TP6AWG5LY1LTlUkEVjmQPc2hTMkL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:10.899132 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--af42a967--eb71--546a--abb0--a5185990ed2a-osd--block--af42a967--eb71--546a--abb0--a5185990ed2a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yf6lEa-f3nO-iewk-DEDy-Fb6j-Kq2P-dbkgMf', 'scsi-0QEMU_QEMU_HARDDISK_1f26c814-af40-4046-ac8d-013998d956cc', 'scsi-SQEMU_QEMU_HARDDISK_1f26c814-af40-4046-ac8d-013998d956cc'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:26:10.899148 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.136232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c15f901f-7629-41e5-bfd5-e721d3f198c6', 'scsi-SQEMU_QEMU_HARDDISK_c15f901f-7629-41e5-bfd5-e721d3f198c6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:26:11.136326 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.136343 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-02-14-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:26:11.136356 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-02-02 03:26:11.136367 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.136377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.136424 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.136435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.136462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.136476 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part1', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part14', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part15', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part16', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:26:11.136490 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:11.136503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89-osd--block--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AITawh-CkpC-7L3c-Vqqe-GXUP-7eEh-WwcXRH', 'scsi-0QEMU_QEMU_HARDDISK_9dac4244-a4bc-44f9-ad81-53a595dd15e5', 'scsi-SQEMU_QEMU_HARDDISK_9dac4244-a4bc-44f9-ad81-53a595dd15e5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:26:11.136553 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--106e1245--4ea8--54a2--9b27--5c2b147fae19-osd--block--106e1245--4ea8--54a2--9b27--5c2b147fae19'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QbZaLy-yUYT-ccut-PcI7-2pGL-9PmJ-6NoPFr', 'scsi-0QEMU_QEMU_HARDDISK_2d3e981f-8554-4288-941a-275f46913f28', 'scsi-SQEMU_QEMU_HARDDISK_2d3e981f-8554-4288-941a-275f46913f28'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:26:11.136582 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_076229ff-17a9-47be-973d-14b64a36a012', 'scsi-SQEMU_QEMU_HARDDISK_076229ff-17a9-47be-973d-14b64a36a012'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:26:11.266481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-02-14-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:26:11.266569 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--d54a22ee--8606--5662--853b--b39e232caa8f-osd--block--d54a22ee--8606--5662--853b--b39e232caa8f', 'dm-uuid-LVM-oyVS0lpzZeiZxxmfRvad67kbexmRBG5IWJAtRWtNBygZ9yUEjcaaQoSOl1TBvsQs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.266581 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e4fc6918--1796--5a48--9994--5f31e91196e6-osd--block--e4fc6918--1796--5a48--9994--5f31e91196e6', 'dm-uuid-LVM-o4NjfQidgd0d8Dt2ERSF2CVjMcc1iNdF2FL70XUBfeOz8qjNKOcDK13w6fcJ9Hta'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.266590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.266669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.266691 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.266699 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.266707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.266730 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.266738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.266746 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:11.266758 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.266787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part1', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part14', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part15', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part16', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:26:11.266831 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d54a22ee--8606--5662--853b--b39e232caa8f-osd--block--d54a22ee--8606--5662--853b--b39e232caa8f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qjdzC2-uhmD-TpwQ-o3eu-AERk-xIpn-IuLEqz', 'scsi-0QEMU_QEMU_HARDDISK_bc39994b-92aa-40f2-807e-6457f6f8ea40', 'scsi-SQEMU_QEMU_HARDDISK_bc39994b-92aa-40f2-807e-6457f6f8ea40'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:26:11.266855 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e4fc6918--1796--5a48--9994--5f31e91196e6-osd--block--e4fc6918--1796--5a48--9994--5f31e91196e6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-etyEN7-O4pu-QliJ-NKxv-0HLx-jIcx-JGZ0d7', 'scsi-0QEMU_QEMU_HARDDISK_10248bd5-0286-487e-81b0-791c797cb21b', 'scsi-SQEMU_QEMU_HARDDISK_10248bd5-0286-487e-81b0-791c797cb21b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:26:11.475201 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e969e129-18ea-460f-85bc-8dfb49c82359', 'scsi-SQEMU_QEMU_HARDDISK_e969e129-18ea-460f-85bc-8dfb49c82359'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:26:11.475295 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-02-14-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:26:11.475331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.475344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-02-02 03:26:11.475365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.475373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.475381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.475390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.475413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.475422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.475437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part1', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part14', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part15', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part16', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:26:11.475454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-02-14-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:26:11.475463 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:11.475473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.475481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.475495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.619822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.619971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.619987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.619998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.620022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.620034 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:11.620065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part1', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part14', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part15', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part16', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:26:11.620087 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-02-14-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:26:11.620099 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:11.620110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.620120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.620135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-02-02 03:26:11.620145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.620155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.620170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.620188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.620216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:26:11.970865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part1', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part14', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part15', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part16', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:26:11.971049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-02-14-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:26:11.971070 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:11.971084 | orchestrator | 2026-02-02 03:26:11.971097 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-02 03:26:11.971110 | orchestrator | Monday 02 February 2026 03:26:11 +0000 (0:00:01.060) 0:00:25.857 ******* 2026-02-02 03:26:11.971123 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2b8f5a57--fc4d--5c4a--8869--764dca19b379-osd--block--2b8f5a57--fc4d--5c4a--8869--764dca19b379', 'dm-uuid-LVM-2Xx1rXy8ZvvzVeymXUM2Y23jmTeKUn30gyH8a84MHrJn7bcz7phSu8LEA3bm3DqO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:11.971181 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--af42a967--eb71--546a--abb0--a5185990ed2a-osd--block--af42a967--eb71--546a--abb0--a5185990ed2a', 'dm-uuid-LVM-nQNI9mGSypmWJN7Kribh0RNL5qLQKFSceYxT4mfzBYfoYiha3ZzoEdYR0rTnnIvK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:11.971195 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:11.971208 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:11.971227 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:11.971239 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:11.971250 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:11.971316 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:11.971337 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.070422 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.070557 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89-osd--block--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89', 'dm-uuid-LVM-bGXwDmNnGJLl15xDO66UDgeGoDbpg8C0HvMSdsO6YcSLb4aDqGATNEcOudg8iQom'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.070589 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part1', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part14', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part15', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part16', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-02 03:26:12.070647 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2b8f5a57--fc4d--5c4a--8869--764dca19b379-osd--block--2b8f5a57--fc4d--5c4a--8869--764dca19b379'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HOxmXw-N5cX-V1Nz-Lu3r-OQk9-N5gG-1syyTi', 'scsi-0QEMU_QEMU_HARDDISK_5578c4aa-4507-4a80-9665-78072b9f11f4', 'scsi-SQEMU_QEMU_HARDDISK_5578c4aa-4507-4a80-9665-78072b9f11f4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.070668 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--106e1245--4ea8--54a2--9b27--5c2b147fae19-osd--block--106e1245--4ea8--54a2--9b27--5c2b147fae19', 'dm-uuid-LVM-7fojGdQjjxzlZ1d67G3lfXV0uQvvNrpG74l8TP6AWG5LY1LTlUkEVjmQPc2hTMkL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.070681 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--af42a967--eb71--546a--abb0--a5185990ed2a-osd--block--af42a967--eb71--546a--abb0--a5185990ed2a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yf6lEa-f3nO-iewk-DEDy-Fb6j-Kq2P-dbkgMf', 'scsi-0QEMU_QEMU_HARDDISK_1f26c814-af40-4046-ac8d-013998d956cc', 'scsi-SQEMU_QEMU_HARDDISK_1f26c814-af40-4046-ac8d-013998d956cc'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.070693 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.070715 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c15f901f-7629-41e5-bfd5-e721d3f198c6', 'scsi-SQEMU_QEMU_HARDDISK_c15f901f-7629-41e5-bfd5-e721d3f198c6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.070735 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.477405 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-02-14-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.477505 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.477516 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.477524 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.477547 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.477555 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.477613 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.477629 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part1', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part14', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part15', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part16', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': 
'512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.477645 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:12.477655 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89-osd--block--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AITawh-CkpC-7L3c-Vqqe-GXUP-7eEh-WwcXRH', 'scsi-0QEMU_QEMU_HARDDISK_9dac4244-a4bc-44f9-ad81-53a595dd15e5', 'scsi-SQEMU_QEMU_HARDDISK_9dac4244-a4bc-44f9-ad81-53a595dd15e5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.477670 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--106e1245--4ea8--54a2--9b27--5c2b147fae19-osd--block--106e1245--4ea8--54a2--9b27--5c2b147fae19'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QbZaLy-yUYT-ccut-PcI7-2pGL-9PmJ-6NoPFr', 'scsi-0QEMU_QEMU_HARDDISK_2d3e981f-8554-4288-941a-275f46913f28', 'scsi-SQEMU_QEMU_HARDDISK_2d3e981f-8554-4288-941a-275f46913f28'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.601768 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_076229ff-17a9-47be-973d-14b64a36a012', 'scsi-SQEMU_QEMU_HARDDISK_076229ff-17a9-47be-973d-14b64a36a012'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.601866 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-02-14-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.601953 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d54a22ee--8606--5662--853b--b39e232caa8f-osd--block--d54a22ee--8606--5662--853b--b39e232caa8f', 'dm-uuid-LVM-oyVS0lpzZeiZxxmfRvad67kbexmRBG5IWJAtRWtNBygZ9yUEjcaaQoSOl1TBvsQs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.601967 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e4fc6918--1796--5a48--9994--5f31e91196e6-osd--block--e4fc6918--1796--5a48--9994--5f31e91196e6', 'dm-uuid-LVM-o4NjfQidgd0d8Dt2ERSF2CVjMcc1iNdF2FL70XUBfeOz8qjNKOcDK13w6fcJ9Hta'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.601979 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.602009 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.602072 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:12.602098 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.602110 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.602128 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.602138 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.602149 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.602159 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.602183 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.628570 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 
'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.628683 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.628697 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.628706 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.628732 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part1', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part14', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part15', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 
'5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part16', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.628749 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.628759 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d54a22ee--8606--5662--853b--b39e232caa8f-osd--block--d54a22ee--8606--5662--853b--b39e232caa8f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qjdzC2-uhmD-TpwQ-o3eu-AERk-xIpn-IuLEqz', 'scsi-0QEMU_QEMU_HARDDISK_bc39994b-92aa-40f2-807e-6457f6f8ea40', 'scsi-SQEMU_QEMU_HARDDISK_bc39994b-92aa-40f2-807e-6457f6f8ea40'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.628768 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.628812 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.628832 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e4fc6918--1796--5a48--9994--5f31e91196e6-osd--block--e4fc6918--1796--5a48--9994--5f31e91196e6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-etyEN7-O4pu-QliJ-NKxv-0HLx-jIcx-JGZ0d7', 'scsi-0QEMU_QEMU_HARDDISK_10248bd5-0286-487e-81b0-791c797cb21b', 'scsi-SQEMU_QEMU_HARDDISK_10248bd5-0286-487e-81b0-791c797cb21b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.869282 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part1', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part14', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part15', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part16', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-02 03:26:12.869414 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e969e129-18ea-460f-85bc-8dfb49c82359', 'scsi-SQEMU_QEMU_HARDDISK_e969e129-18ea-460f-85bc-8dfb49c82359'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.869463 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-02-14-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.869516 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 
'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-02-14-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.869569 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:12.869595 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.869616 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.869635 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.869654 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.869674 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:12.869715 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:13.097624 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:13.097712 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:13.097727 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part1', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part14', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part15', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part16', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 
'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:13.097798 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-02-14-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:13.097811 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:13.097820 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:13.097829 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:13.097837 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
[], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:13.097846 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:13.097854 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:13.097862 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:13.097886 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:13.097992 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:20.576183 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:20.576317 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part1', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part14', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part15', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part16', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:20.576385 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-02-14-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:26:20.576402 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:20.576416 | orchestrator | 2026-02-02 03:26:20.576428 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-02 03:26:20.576441 | orchestrator | Monday 02 February 2026 03:26:13 +0000 (0:00:01.373) 0:00:27.231 ******* 2026-02-02 03:26:20.576452 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:26:20.576464 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:26:20.576475 | orchestrator | ok: [testbed-node-5] 2026-02-02 
03:26:20.576486 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:26:20.576497 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:26:20.576508 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:26:20.576519 | orchestrator | 2026-02-02 03:26:20.576551 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-02 03:26:20.576564 | orchestrator | Monday 02 February 2026 03:26:14 +0000 (0:00:01.002) 0:00:28.233 ******* 2026-02-02 03:26:20.576574 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:26:20.576585 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:26:20.576596 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:26:20.576608 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:26:20.576618 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:26:20.576630 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:26:20.576641 | orchestrator | 2026-02-02 03:26:20.576652 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-02 03:26:20.576662 | orchestrator | Monday 02 February 2026 03:26:14 +0000 (0:00:00.848) 0:00:29.082 ******* 2026-02-02 03:26:20.576674 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:20.576686 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:20.576698 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:20.576709 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:20.576722 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:20.576732 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:20.576743 | orchestrator | 2026-02-02 03:26:20.576755 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-02 03:26:20.576768 | orchestrator | Monday 02 February 2026 03:26:15 +0000 (0:00:00.615) 0:00:29.697 ******* 2026-02-02 03:26:20.576780 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:20.576791 | orchestrator | skipping: [testbed-node-4] 
2026-02-02 03:26:20.576803 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:20.576814 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:20.576825 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:20.576836 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:20.576847 | orchestrator | 2026-02-02 03:26:20.576859 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-02 03:26:20.576870 | orchestrator | Monday 02 February 2026 03:26:16 +0000 (0:00:00.883) 0:00:30.581 ******* 2026-02-02 03:26:20.576881 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:20.576928 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:20.576942 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:20.576968 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:20.576980 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:20.576992 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:20.577004 | orchestrator | 2026-02-02 03:26:20.577016 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-02 03:26:20.577028 | orchestrator | Monday 02 February 2026 03:26:17 +0000 (0:00:00.673) 0:00:31.254 ******* 2026-02-02 03:26:20.577039 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:20.577051 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:20.577063 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:20.577075 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:20.577087 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:20.577098 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:20.577109 | orchestrator | 2026-02-02 03:26:20.577122 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-02 03:26:20.577133 | orchestrator | Monday 02 February 2026 03:26:17 +0000 (0:00:00.881) 0:00:32.136 ******* 
2026-02-02 03:26:20.577145 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-02 03:26:20.577157 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-02 03:26:20.577169 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-02 03:26:20.577180 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-02 03:26:20.577192 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-02 03:26:20.577204 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-02 03:26:20.577215 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-02 03:26:20.577228 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-02 03:26:20.577236 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-02 03:26:20.577243 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-02 03:26:20.577250 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-02 03:26:20.577257 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-02 03:26:20.577265 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-02 03:26:20.577272 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-02 03:26:20.577279 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-02-02 03:26:20.577286 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-02 03:26:20.577293 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-02 03:26:20.577309 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-02 03:26:20.577317 | orchestrator | 2026-02-02 03:26:20.577324 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-02 03:26:20.577331 | orchestrator | Monday 02 February 2026 03:26:19 +0000 (0:00:01.770) 0:00:33.906 ******* 2026-02-02 03:26:20.577338 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-0)  2026-02-02 03:26:20.577346 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-02 03:26:20.577353 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-02 03:26:20.577361 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:20.577368 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-02 03:26:20.577375 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-02 03:26:20.577382 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-02 03:26:20.577389 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:20.577396 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-02 03:26:20.577403 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-02 03:26:20.577410 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-02 03:26:20.577417 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:20.577425 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-02 03:26:20.577432 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-02 03:26:20.577458 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-02 03:26:37.825780 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:37.825969 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-02 03:26:37.825991 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-02 03:26:37.825998 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-02 03:26:37.826005 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:37.826012 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-02 03:26:37.826070 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-02 03:26:37.826077 | orchestrator | skipping: 
[testbed-node-2] => (item=testbed-node-2)  2026-02-02 03:26:37.826084 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:37.826091 | orchestrator | 2026-02-02 03:26:37.826099 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-02 03:26:37.826109 | orchestrator | Monday 02 February 2026 03:26:20 +0000 (0:00:01.095) 0:00:35.002 ******* 2026-02-02 03:26:37.826116 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:37.826122 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:37.826129 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:37.826136 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 03:26:37.826142 | orchestrator | 2026-02-02 03:26:37.826148 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-02 03:26:37.826156 | orchestrator | Monday 02 February 2026 03:26:21 +0000 (0:00:01.101) 0:00:36.104 ******* 2026-02-02 03:26:37.826162 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:37.826169 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:37.826175 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:37.826181 | orchestrator | 2026-02-02 03:26:37.826188 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-02 03:26:37.826194 | orchestrator | Monday 02 February 2026 03:26:22 +0000 (0:00:00.350) 0:00:36.455 ******* 2026-02-02 03:26:37.826201 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:37.826208 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:37.826215 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:37.826222 | orchestrator | 2026-02-02 03:26:37.826229 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 
2026-02-02 03:26:37.826235 | orchestrator | Monday 02 February 2026 03:26:22 +0000 (0:00:00.355) 0:00:36.810 ******* 2026-02-02 03:26:37.826280 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:37.826287 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:37.826294 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:37.826301 | orchestrator | 2026-02-02 03:26:37.826307 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-02 03:26:37.826315 | orchestrator | Monday 02 February 2026 03:26:23 +0000 (0:00:00.339) 0:00:37.150 ******* 2026-02-02 03:26:37.826321 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:26:37.826330 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:26:37.826337 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:26:37.826345 | orchestrator | 2026-02-02 03:26:37.826352 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-02 03:26:37.826360 | orchestrator | Monday 02 February 2026 03:26:23 +0000 (0:00:00.742) 0:00:37.892 ******* 2026-02-02 03:26:37.826368 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-02 03:26:37.826377 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-02 03:26:37.826386 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-02 03:26:37.826394 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:37.826401 | orchestrator | 2026-02-02 03:26:37.826408 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-02 03:26:37.826439 | orchestrator | Monday 02 February 2026 03:26:24 +0000 (0:00:00.385) 0:00:38.278 ******* 2026-02-02 03:26:37.826445 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-02 03:26:37.826453 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-02 03:26:37.826459 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-5)  2026-02-02 03:26:37.826466 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:37.826471 | orchestrator | 2026-02-02 03:26:37.826477 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-02 03:26:37.826483 | orchestrator | Monday 02 February 2026 03:26:24 +0000 (0:00:00.414) 0:00:38.693 ******* 2026-02-02 03:26:37.826502 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-02 03:26:37.826508 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-02 03:26:37.826515 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-02 03:26:37.826520 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:37.826526 | orchestrator | 2026-02-02 03:26:37.826532 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-02 03:26:37.826537 | orchestrator | Monday 02 February 2026 03:26:25 +0000 (0:00:00.462) 0:00:39.156 ******* 2026-02-02 03:26:37.826543 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:26:37.826549 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:26:37.826555 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:26:37.826560 | orchestrator | 2026-02-02 03:26:37.826566 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-02 03:26:37.826573 | orchestrator | Monday 02 February 2026 03:26:25 +0000 (0:00:00.361) 0:00:39.518 ******* 2026-02-02 03:26:37.826579 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-02 03:26:37.826585 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-02 03:26:37.826591 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-02 03:26:37.826597 | orchestrator | 2026-02-02 03:26:37.826602 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-02 03:26:37.826608 | orchestrator | Monday 02 February 2026 
03:26:26 +0000 (0:00:01.071) 0:00:40.589 ******* 2026-02-02 03:26:37.826614 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 03:26:37.826643 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 03:26:37.826650 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 03:26:37.826655 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-02 03:26:37.826661 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-02 03:26:37.826667 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-02 03:26:37.826673 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-02 03:26:37.826679 | orchestrator | 2026-02-02 03:26:37.826686 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-02 03:26:37.826692 | orchestrator | Monday 02 February 2026 03:26:27 +0000 (0:00:00.837) 0:00:41.426 ******* 2026-02-02 03:26:37.826698 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 03:26:37.826705 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 03:26:37.826711 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 03:26:37.826717 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-02 03:26:37.826724 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-02 03:26:37.826729 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-02 03:26:37.826735 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 
2026-02-02 03:26:37.826741 | orchestrator | 2026-02-02 03:26:37.826748 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-02 03:26:37.826760 | orchestrator | Monday 02 February 2026 03:26:29 +0000 (0:00:01.994) 0:00:43.420 ******* 2026-02-02 03:26:37.826766 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:26:37.826771 | orchestrator | 2026-02-02 03:26:37.826775 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-02 03:26:37.826778 | orchestrator | Monday 02 February 2026 03:26:30 +0000 (0:00:01.350) 0:00:44.771 ******* 2026-02-02 03:26:37.826782 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:26:37.826786 | orchestrator | 2026-02-02 03:26:37.826790 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-02 03:26:37.826794 | orchestrator | Monday 02 February 2026 03:26:31 +0000 (0:00:01.338) 0:00:46.109 ******* 2026-02-02 03:26:37.826799 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:37.826805 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:37.826810 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:37.826816 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:26:37.826822 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:26:37.826828 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:26:37.826833 | orchestrator | 2026-02-02 03:26:37.826840 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-02 03:26:37.826846 | orchestrator | Monday 02 February 2026 03:26:33 +0000 (0:00:01.356) 0:00:47.465 ******* 2026-02-02 
03:26:37.826851 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:37.826858 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:26:37.826864 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:37.826870 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:26:37.826876 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:37.826906 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:26:37.826911 | orchestrator | 2026-02-02 03:26:37.826917 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-02 03:26:37.826922 | orchestrator | Monday 02 February 2026 03:26:34 +0000 (0:00:00.683) 0:00:48.149 ******* 2026-02-02 03:26:37.826928 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:26:37.826934 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:26:37.826939 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:26:37.826944 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:37.826949 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:37.826954 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:37.826960 | orchestrator | 2026-02-02 03:26:37.826971 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-02 03:26:37.826978 | orchestrator | Monday 02 February 2026 03:26:35 +0000 (0:00:01.048) 0:00:49.197 ******* 2026-02-02 03:26:37.826984 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:26:37.826989 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:37.826995 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:26:37.827001 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:37.827007 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:37.827013 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:26:37.827019 | orchestrator | 2026-02-02 03:26:37.827025 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-02 03:26:37.827031 | orchestrator | 
Monday 02 February 2026 03:26:35 +0000 (0:00:00.798) 0:00:49.996 ******* 2026-02-02 03:26:37.827037 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:37.827044 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:37.827050 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:37.827057 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:26:37.827062 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:26:37.827069 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:26:37.827075 | orchestrator | 2026-02-02 03:26:37.827081 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-02 03:26:37.827095 | orchestrator | Monday 02 February 2026 03:26:37 +0000 (0:00:01.297) 0:00:51.293 ******* 2026-02-02 03:26:37.827101 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:37.827107 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:37.827114 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:37.827120 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:37.827135 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:59.468617 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:59.468739 | orchestrator | 2026-02-02 03:26:59.468777 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-02 03:26:59.468792 | orchestrator | Monday 02 February 2026 03:26:37 +0000 (0:00:00.667) 0:00:51.961 ******* 2026-02-02 03:26:59.468815 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:59.468827 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:59.468838 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:59.468849 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:59.468860 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:59.468897 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:59.468909 | orchestrator | 2026-02-02 03:26:59.468920 | orchestrator | TASK 
[ceph-handler : Check for a ceph-crash container] ************************* 2026-02-02 03:26:59.468931 | orchestrator | Monday 02 February 2026 03:26:38 +0000 (0:00:00.942) 0:00:52.903 ******* 2026-02-02 03:26:59.468943 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:26:59.468955 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:26:59.468966 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:26:59.468977 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:26:59.468988 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:26:59.468999 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:26:59.469010 | orchestrator | 2026-02-02 03:26:59.469021 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-02 03:26:59.469032 | orchestrator | Monday 02 February 2026 03:26:39 +0000 (0:00:01.155) 0:00:54.059 ******* 2026-02-02 03:26:59.469043 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:26:59.469054 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:26:59.469065 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:26:59.469076 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:26:59.469087 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:26:59.469097 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:26:59.469108 | orchestrator | 2026-02-02 03:26:59.469119 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-02 03:26:59.469133 | orchestrator | Monday 02 February 2026 03:26:41 +0000 (0:00:01.416) 0:00:55.476 ******* 2026-02-02 03:26:59.469145 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:59.469157 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:59.469169 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:59.469182 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:59.469195 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:59.469208 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:59.469220 | 
orchestrator | 2026-02-02 03:26:59.469233 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-02 03:26:59.469246 | orchestrator | Monday 02 February 2026 03:26:42 +0000 (0:00:00.700) 0:00:56.176 ******* 2026-02-02 03:26:59.469258 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:59.469270 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:59.469283 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:59.469296 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:26:59.469308 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:26:59.469321 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:26:59.469333 | orchestrator | 2026-02-02 03:26:59.469345 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-02 03:26:59.469357 | orchestrator | Monday 02 February 2026 03:26:43 +0000 (0:00:01.079) 0:00:57.256 ******* 2026-02-02 03:26:59.469369 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:26:59.469382 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:26:59.469422 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:26:59.469435 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:59.469448 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:59.469460 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:59.469473 | orchestrator | 2026-02-02 03:26:59.469486 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-02 03:26:59.469499 | orchestrator | Monday 02 February 2026 03:26:43 +0000 (0:00:00.722) 0:00:57.979 ******* 2026-02-02 03:26:59.469511 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:26:59.469522 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:26:59.469532 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:26:59.469543 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:59.469554 | orchestrator | skipping: [testbed-node-1] 2026-02-02 
03:26:59.469564 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:59.469575 | orchestrator | 2026-02-02 03:26:59.469586 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-02 03:26:59.469597 | orchestrator | Monday 02 February 2026 03:26:44 +0000 (0:00:00.961) 0:00:58.940 ******* 2026-02-02 03:26:59.469608 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:26:59.469618 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:26:59.469629 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:26:59.469640 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:59.469650 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:59.469676 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:59.469688 | orchestrator | 2026-02-02 03:26:59.469699 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-02 03:26:59.469709 | orchestrator | Monday 02 February 2026 03:26:45 +0000 (0:00:00.688) 0:00:59.629 ******* 2026-02-02 03:26:59.469720 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:59.469731 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:59.469742 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:59.469753 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:59.469763 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:59.469774 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:59.469784 | orchestrator | 2026-02-02 03:26:59.469795 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-02 03:26:59.469806 | orchestrator | Monday 02 February 2026 03:26:46 +0000 (0:00:00.941) 0:01:00.570 ******* 2026-02-02 03:26:59.469817 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:59.469828 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:59.469839 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:59.469849 | 
orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:59.469860 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:59.469908 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:59.469919 | orchestrator | 2026-02-02 03:26:59.469930 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-02 03:26:59.469941 | orchestrator | Monday 02 February 2026 03:26:47 +0000 (0:00:00.698) 0:01:01.269 ******* 2026-02-02 03:26:59.469952 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:59.469963 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:59.469974 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:59.470004 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:26:59.470082 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:26:59.470095 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:26:59.470106 | orchestrator | 2026-02-02 03:26:59.470117 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-02 03:26:59.470128 | orchestrator | Monday 02 February 2026 03:26:48 +0000 (0:00:00.946) 0:01:02.215 ******* 2026-02-02 03:26:59.470139 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:26:59.470150 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:26:59.470160 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:26:59.470171 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:26:59.470182 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:26:59.470192 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:26:59.470214 | orchestrator | 2026-02-02 03:26:59.470225 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-02 03:26:59.470236 | orchestrator | Monday 02 February 2026 03:26:48 +0000 (0:00:00.684) 0:01:02.900 ******* 2026-02-02 03:26:59.470247 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:26:59.470258 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:26:59.470269 | 
orchestrator | ok: [testbed-node-5] 2026-02-02 03:26:59.470279 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:26:59.470290 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:26:59.470301 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:26:59.470312 | orchestrator | 2026-02-02 03:26:59.470323 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-02 03:26:59.470334 | orchestrator | Monday 02 February 2026 03:26:50 +0000 (0:00:01.472) 0:01:04.372 ******* 2026-02-02 03:26:59.470345 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:26:59.470356 | orchestrator | changed: [testbed-node-3] 2026-02-02 03:26:59.470367 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:26:59.470378 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:26:59.470389 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:26:59.470399 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:26:59.470410 | orchestrator | 2026-02-02 03:26:59.470421 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-02 03:26:59.470432 | orchestrator | Monday 02 February 2026 03:26:52 +0000 (0:00:01.771) 0:01:06.144 ******* 2026-02-02 03:26:59.470443 | orchestrator | changed: [testbed-node-3] 2026-02-02 03:26:59.470453 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:26:59.470464 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:26:59.470475 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:26:59.470485 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:26:59.470496 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:26:59.470507 | orchestrator | 2026-02-02 03:26:59.470518 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-02 03:26:59.470528 | orchestrator | Monday 02 February 2026 03:26:54 +0000 (0:00:02.011) 0:01:08.155 ******* 2026-02-02 03:26:59.470541 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:26:59.470554 | orchestrator | 2026-02-02 03:26:59.470574 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-02 03:26:59.470591 | orchestrator | Monday 02 February 2026 03:26:55 +0000 (0:00:01.510) 0:01:09.666 ******* 2026-02-02 03:26:59.470611 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:59.470631 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:59.470649 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:59.470668 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:59.470679 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:59.470690 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:59.470701 | orchestrator | 2026-02-02 03:26:59.470712 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-02 03:26:59.470723 | orchestrator | Monday 02 February 2026 03:26:56 +0000 (0:00:00.679) 0:01:10.346 ******* 2026-02-02 03:26:59.470734 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:26:59.470745 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:26:59.470755 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:26:59.470766 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:26:59.470777 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:26:59.470788 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:26:59.470798 | orchestrator | 2026-02-02 03:26:59.470809 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-02 03:26:59.470820 | orchestrator | Monday 02 February 2026 03:26:57 +0000 (0:00:00.839) 0:01:11.185 ******* 2026-02-02 03:26:59.470831 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-02 
03:26:59.470849 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-02 03:26:59.470893 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-02 03:26:59.470905 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-02 03:26:59.470916 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-02 03:26:59.470927 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-02 03:26:59.470939 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-02 03:26:59.470950 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-02 03:26:59.470961 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-02 03:26:59.470972 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-02 03:26:59.470982 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-02 03:26:59.470993 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-02 03:26:59.471004 | orchestrator | 2026-02-02 03:26:59.471015 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-02 03:26:59.471026 | orchestrator | Monday 02 February 2026 03:26:58 +0000 (0:00:01.271) 0:01:12.457 ******* 2026-02-02 03:26:59.471046 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:28:16.330998 | orchestrator | changed: [testbed-node-3] 2026-02-02 03:28:16.331116 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:28:16.331132 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:28:16.331144 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:28:16.331155 | 
orchestrator | changed: [testbed-node-2] 2026-02-02 03:28:16.331167 | orchestrator | 2026-02-02 03:28:16.331179 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-02 03:28:16.331193 | orchestrator | Monday 02 February 2026 03:26:59 +0000 (0:00:01.143) 0:01:13.600 ******* 2026-02-02 03:28:16.331204 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:28:16.331215 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:28:16.331226 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:28:16.331237 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:28:16.331248 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:28:16.331259 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:28:16.331270 | orchestrator | 2026-02-02 03:28:16.331282 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-02 03:28:16.331293 | orchestrator | Monday 02 February 2026 03:27:00 +0000 (0:00:00.660) 0:01:14.261 ******* 2026-02-02 03:28:16.331304 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:28:16.331315 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:28:16.331326 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:28:16.331337 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:28:16.331348 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:28:16.331358 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:28:16.331369 | orchestrator | 2026-02-02 03:28:16.331380 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-02 03:28:16.331392 | orchestrator | Monday 02 February 2026 03:27:00 +0000 (0:00:00.846) 0:01:15.108 ******* 2026-02-02 03:28:16.331404 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:28:16.331415 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:28:16.331426 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:28:16.331437 | 
orchestrator | skipping: [testbed-node-0] 2026-02-02 03:28:16.331448 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:28:16.331459 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:28:16.331469 | orchestrator | 2026-02-02 03:28:16.331481 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-02 03:28:16.331492 | orchestrator | Monday 02 February 2026 03:27:01 +0000 (0:00:00.609) 0:01:15.717 ******* 2026-02-02 03:28:16.331527 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:28:16.331542 | orchestrator | 2026-02-02 03:28:16.331556 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-02 03:28:16.331568 | orchestrator | Monday 02 February 2026 03:27:02 +0000 (0:00:01.331) 0:01:17.049 ******* 2026-02-02 03:28:16.331581 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:28:16.331594 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:28:16.331607 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:28:16.331620 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:28:16.331633 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:28:16.331645 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:28:16.331657 | orchestrator | 2026-02-02 03:28:16.331670 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-02 03:28:16.331684 | orchestrator | Monday 02 February 2026 03:28:03 +0000 (0:01:00.676) 0:02:17.726 ******* 2026-02-02 03:28:16.331697 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-02 03:28:16.331710 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-02 03:28:16.331723 | orchestrator | skipping: [testbed-node-3] => 
(item=docker.io/grafana/grafana:6.7.4)  2026-02-02 03:28:16.331735 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:28:16.331748 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-02 03:28:16.331760 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-02 03:28:16.331773 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-02 03:28:16.331785 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:28:16.331798 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-02 03:28:16.331838 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-02 03:28:16.331867 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-02 03:28:16.331881 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:28:16.331892 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-02 03:28:16.331903 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-02 03:28:16.331914 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-02 03:28:16.331925 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:28:16.331936 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-02 03:28:16.331946 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-02 03:28:16.331957 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-02 03:28:16.331968 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:28:16.331979 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-02 03:28:16.331990 | orchestrator | skipping: [testbed-node-2] => 
(item=docker.io/prom/prometheus:v2.7.2)  2026-02-02 03:28:16.332001 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-02 03:28:16.332012 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:28:16.332022 | orchestrator | 2026-02-02 03:28:16.332034 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-02 03:28:16.332063 | orchestrator | Monday 02 February 2026 03:28:04 +0000 (0:00:00.811) 0:02:18.537 ******* 2026-02-02 03:28:16.332074 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:28:16.332085 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:28:16.332097 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:28:16.332108 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:28:16.332119 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:28:16.332138 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:28:16.332149 | orchestrator | 2026-02-02 03:28:16.332161 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-02 03:28:16.332171 | orchestrator | Monday 02 February 2026 03:28:05 +0000 (0:00:00.861) 0:02:19.399 ******* 2026-02-02 03:28:16.332182 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:28:16.332193 | orchestrator | 2026-02-02 03:28:16.332204 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-02 03:28:16.332215 | orchestrator | Monday 02 February 2026 03:28:05 +0000 (0:00:00.173) 0:02:19.573 ******* 2026-02-02 03:28:16.332226 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:28:16.332237 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:28:16.332248 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:28:16.332258 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:28:16.332269 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:28:16.332280 | orchestrator | skipping: 
[testbed-node-2] 2026-02-02 03:28:16.332291 | orchestrator | 2026-02-02 03:28:16.332301 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-02 03:28:16.332313 | orchestrator | Monday 02 February 2026 03:28:06 +0000 (0:00:00.691) 0:02:20.264 ******* 2026-02-02 03:28:16.332323 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:28:16.332334 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:28:16.332345 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:28:16.332356 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:28:16.332367 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:28:16.332377 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:28:16.332388 | orchestrator | 2026-02-02 03:28:16.332399 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-02 03:28:16.332410 | orchestrator | Monday 02 February 2026 03:28:07 +0000 (0:00:00.922) 0:02:21.187 ******* 2026-02-02 03:28:16.332421 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:28:16.332432 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:28:16.332443 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:28:16.332454 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:28:16.332464 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:28:16.332475 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:28:16.332486 | orchestrator | 2026-02-02 03:28:16.332497 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-02 03:28:16.332508 | orchestrator | Monday 02 February 2026 03:28:07 +0000 (0:00:00.703) 0:02:21.890 ******* 2026-02-02 03:28:16.332519 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:28:16.332530 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:28:16.332541 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:28:16.332552 | orchestrator | ok: [testbed-node-2] 2026-02-02 
03:28:16.332562 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:28:16.332573 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:28:16.332584 | orchestrator | 2026-02-02 03:28:16.332595 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-02 03:28:16.332606 | orchestrator | Monday 02 February 2026 03:28:11 +0000 (0:00:03.556) 0:02:25.447 ******* 2026-02-02 03:28:16.332617 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:28:16.332628 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:28:16.332639 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:28:16.332649 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:28:16.332660 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:28:16.332671 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:28:16.332681 | orchestrator | 2026-02-02 03:28:16.332692 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-02 03:28:16.332703 | orchestrator | Monday 02 February 2026 03:28:11 +0000 (0:00:00.677) 0:02:26.125 ******* 2026-02-02 03:28:16.332715 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:28:16.332728 | orchestrator | 2026-02-02 03:28:16.332739 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-02 03:28:16.332758 | orchestrator | Monday 02 February 2026 03:28:13 +0000 (0:00:01.411) 0:02:27.536 ******* 2026-02-02 03:28:16.332769 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:28:16.332780 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:28:16.332791 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:28:16.332802 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:28:16.332842 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:28:16.332854 | orchestrator | skipping: 
[testbed-node-2] 2026-02-02 03:28:16.332865 | orchestrator | 2026-02-02 03:28:16.332875 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-02 03:28:16.332886 | orchestrator | Monday 02 February 2026 03:28:14 +0000 (0:00:00.878) 0:02:28.415 ******* 2026-02-02 03:28:16.332897 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:28:16.332908 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:28:16.332919 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:28:16.332930 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:28:16.332940 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:28:16.332951 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:28:16.332962 | orchestrator | 2026-02-02 03:28:16.332973 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-02 03:28:16.332984 | orchestrator | Monday 02 February 2026 03:28:14 +0000 (0:00:00.714) 0:02:29.129 ******* 2026-02-02 03:28:16.332995 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:28:16.333006 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:28:16.333016 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:28:16.333027 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:28:16.333038 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:28:16.333049 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:28:16.333060 | orchestrator | 2026-02-02 03:28:16.333071 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-02 03:28:16.333081 | orchestrator | Monday 02 February 2026 03:28:15 +0000 (0:00:00.907) 0:02:30.037 ******* 2026-02-02 03:28:16.333093 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:28:16.333104 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:28:16.333121 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:28:28.938925 | orchestrator | skipping: 
[testbed-node-0] 2026-02-02 03:28:28.939028 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:28:28.939040 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:28:28.939049 | orchestrator | 2026-02-02 03:28:28.939058 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-02 03:28:28.939068 | orchestrator | Monday 02 February 2026 03:28:16 +0000 (0:00:00.663) 0:02:30.700 ******* 2026-02-02 03:28:28.939075 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:28:28.939083 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:28:28.939090 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:28:28.939098 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:28:28.939105 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:28:28.939112 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:28:28.939119 | orchestrator | 2026-02-02 03:28:28.939126 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-02 03:28:28.939134 | orchestrator | Monday 02 February 2026 03:28:17 +0000 (0:00:00.946) 0:02:31.647 ******* 2026-02-02 03:28:28.939141 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:28:28.939148 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:28:28.939155 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:28:28.939162 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:28:28.939169 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:28:28.939177 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:28:28.939184 | orchestrator | 2026-02-02 03:28:28.939191 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-02 03:28:28.939199 | orchestrator | Monday 02 February 2026 03:28:18 +0000 (0:00:00.647) 0:02:32.295 ******* 2026-02-02 03:28:28.939227 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:28:28.939235 | orchestrator | skipping: 
[testbed-node-4] 2026-02-02 03:28:28.939243 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:28:28.939250 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:28:28.939257 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:28:28.939264 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:28:28.939272 | orchestrator | 2026-02-02 03:28:28.939279 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-02 03:28:28.939286 | orchestrator | Monday 02 February 2026 03:28:19 +0000 (0:00:00.954) 0:02:33.249 ******* 2026-02-02 03:28:28.939293 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:28:28.939300 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:28:28.939307 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:28:28.939314 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:28:28.939321 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:28:28.939328 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:28:28.939335 | orchestrator | 2026-02-02 03:28:28.939342 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-02 03:28:28.939350 | orchestrator | Monday 02 February 2026 03:28:19 +0000 (0:00:00.639) 0:02:33.889 ******* 2026-02-02 03:28:28.939357 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:28:28.939376 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:28:28.939384 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:28:28.939391 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:28:28.939398 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:28:28.939406 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:28:28.939413 | orchestrator | 2026-02-02 03:28:28.939420 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-02 03:28:28.939427 | orchestrator | Monday 02 February 2026 03:28:21 +0000 (0:00:01.401) 0:02:35.290 ******* 2026-02-02 
03:28:28.939435 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:28:28.939444 | orchestrator | 2026-02-02 03:28:28.939452 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-02 03:28:28.939459 | orchestrator | Monday 02 February 2026 03:28:22 +0000 (0:00:01.361) 0:02:36.651 ******* 2026-02-02 03:28:28.939466 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-02-02 03:28:28.939474 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-02-02 03:28:28.939481 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-02-02 03:28:28.939488 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-02-02 03:28:28.939496 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-02-02 03:28:28.939503 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-02 03:28:28.939510 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-02 03:28:28.939530 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-02-02 03:28:28.939538 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-02-02 03:28:28.939545 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-02-02 03:28:28.939552 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-02 03:28:28.939559 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-02 03:28:28.939567 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-02 03:28:28.939574 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-02-02 03:28:28.939581 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-02-02 03:28:28.939589 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 
2026-02-02 03:28:28.939596 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-02 03:28:28.939603 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-02-02 03:28:28.939610 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-02 03:28:28.939624 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-02-02 03:28:28.939631 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-02-02 03:28:28.939638 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-02-02 03:28:28.939645 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-02 03:28:28.939653 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-02 03:28:28.939674 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-02-02 03:28:28.939682 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-02-02 03:28:28.939689 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-02-02 03:28:28.939696 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-02-02 03:28:28.939703 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-02 03:28:28.939711 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-02 03:28:28.939718 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-02-02 03:28:28.939725 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-02-02 03:28:28.939732 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-02-02 03:28:28.939739 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-02-02 03:28:28.939747 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-02 03:28:28.939754 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-02 03:28:28.939761 | 
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-02-02 03:28:28.939768 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-02-02 03:28:28.939775 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-02-02 03:28:28.939782 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-02-02 03:28:28.939790 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-02 03:28:28.939820 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-02 03:28:28.939833 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-02-02 03:28:28.939846 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-02-02 03:28:28.939858 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-02-02 03:28:28.939870 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-02-02 03:28:28.939881 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-02 03:28:28.939888 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-02 03:28:28.939895 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-02 03:28:28.939903 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-02-02 03:28:28.939910 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-02-02 03:28:28.939917 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-02-02 03:28:28.939924 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-02 03:28:28.939932 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-02 03:28:28.939939 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-02 03:28:28.939946 | orchestrator | changed: 
[testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-02 03:28:28.939953 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-02 03:28:28.939961 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-02 03:28:28.939968 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-02 03:28:28.939975 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-02 03:28:28.939983 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-02 03:28:28.939996 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-02 03:28:28.940003 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-02 03:28:28.940010 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-02 03:28:28.940017 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-02 03:28:28.940025 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-02 03:28:28.940032 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-02 03:28:28.940044 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-02 03:28:28.940052 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-02 03:28:28.940059 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-02 03:28:28.940066 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-02 03:28:28.940074 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-02 03:28:28.940081 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-02 03:28:28.940088 | orchestrator | changed: [testbed-node-1] => 
(item=/var/lib/ceph/bootstrap-osd) 2026-02-02 03:28:28.940095 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-02 03:28:28.940102 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-02 03:28:28.940115 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-02 03:28:28.940127 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-02-02 03:28:28.940139 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-02 03:28:28.940150 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-02 03:28:28.940162 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-02 03:28:28.940172 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-02 03:28:28.940192 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-02-02 03:28:44.579993 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-02-02 03:28:44.580117 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-02 03:28:44.580136 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-02-02 03:28:44.580162 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-02 03:28:44.580176 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-02 03:28:44.580189 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-02-02 03:28:44.580202 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-02-02 03:28:44.580215 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-02-02 03:28:44.580228 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-02-02 03:28:44.580240 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 
2026-02-02 03:28:44.580253 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-02-02 03:28:44.580266 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-02-02 03:28:44.580279 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-02-02 03:28:44.580292 | orchestrator |
2026-02-02 03:28:44.580306 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-02 03:28:44.580320 | orchestrator | Monday 02 February 2026 03:28:28 +0000 (0:00:06.367) 0:02:43.019 *******
2026-02-02 03:28:44.580334 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:28:44.580349 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:28:44.580363 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:28:44.580377 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 03:28:44.580423 | orchestrator |
2026-02-02 03:28:44.580437 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-02 03:28:44.580450 | orchestrator | Monday 02 February 2026 03:28:30 +0000 (0:00:01.238) 0:02:44.258 *******
2026-02-02 03:28:44.580464 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-02 03:28:44.580479 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-02 03:28:44.580491 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-02 03:28:44.580504 | orchestrator |
2026-02-02 03:28:44.580518 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-02 03:28:44.580532 | orchestrator | Monday 02 February 2026 03:28:30 +0000 (0:00:00.717) 0:02:44.975 *******
2026-02-02 03:28:44.580547 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-02 03:28:44.580561 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-02 03:28:44.580575 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-02 03:28:44.580588 | orchestrator |
2026-02-02 03:28:44.580601 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-02 03:28:44.580619 | orchestrator | Monday 02 February 2026 03:28:32 +0000 (0:00:01.174) 0:02:46.149 *******
2026-02-02 03:28:44.580636 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:28:44.580649 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:28:44.580661 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:28:44.580675 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:28:44.580688 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:28:44.580701 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:28:44.580713 | orchestrator |
2026-02-02 03:28:44.580726 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-02 03:28:44.580852 | orchestrator | Monday 02 February 2026 03:28:32 +0000 (0:00:00.893) 0:02:47.043 *******
2026-02-02 03:28:44.580868 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:28:44.580878 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:28:44.580887 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:28:44.580897 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:28:44.580905 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:28:44.580913 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:28:44.580921 | orchestrator |
2026-02-02 03:28:44.580929 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-02 03:28:44.580937 | orchestrator | Monday 02 February 2026 03:28:33 +0000 (0:00:00.699) 0:02:47.743 *******
2026-02-02 03:28:44.580945 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:28:44.580953 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:28:44.580961 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:28:44.580970 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:28:44.580978 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:28:44.580986 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:28:44.580995 | orchestrator |
2026-02-02 03:28:44.581003 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-02 03:28:44.581011 | orchestrator | Monday 02 February 2026 03:28:34 +0000 (0:00:01.003) 0:02:48.746 *******
2026-02-02 03:28:44.581019 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:28:44.581027 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:28:44.581035 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:28:44.581043 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:28:44.581051 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:28:44.581058 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:28:44.581077 | orchestrator |
2026-02-02 03:28:44.581086 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-02 03:28:44.581115 | orchestrator | Monday 02 February 2026 03:28:35 +0000 (0:00:00.759) 0:02:49.506 *******
2026-02-02 03:28:44.581129 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:28:44.581142 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:28:44.581154 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:28:44.581166 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:28:44.581177 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:28:44.581190 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:28:44.581203 | orchestrator |
2026-02-02 03:28:44.581216 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-02 03:28:44.581230 | orchestrator | Monday 02 February 2026 03:28:36 +0000 (0:00:00.948) 0:02:50.455 *******
2026-02-02 03:28:44.581243 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:28:44.581256 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:28:44.581269 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:28:44.581282 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:28:44.581296 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:28:44.581308 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:28:44.581323 | orchestrator |
2026-02-02 03:28:44.581337 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-02 03:28:44.581351 | orchestrator | Monday 02 February 2026 03:28:36 +0000 (0:00:00.637) 0:02:51.093 *******
2026-02-02 03:28:44.581364 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:28:44.581402 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:28:44.581422 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:28:44.581436 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:28:44.581448 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:28:44.581460 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:28:44.581472 | orchestrator |
2026-02-02 03:28:44.581485 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-02 03:28:44.581506 | orchestrator | Monday 02 February 2026 03:28:37 +0000 (0:00:00.893) 0:02:51.986 *******
2026-02-02 03:28:44.581520 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:28:44.581533 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:28:44.581546 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:28:44.581559 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:28:44.581578 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:28:44.581591 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:28:44.581604 | orchestrator |
2026-02-02 03:28:44.581616 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-02 03:28:44.581630 | orchestrator | Monday 02 February 2026 03:28:38 +0000 (0:00:00.638) 0:02:52.624 *******
2026-02-02 03:28:44.581644 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:28:44.581658 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:28:44.581670 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:28:44.581682 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:28:44.581696 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:28:44.581708 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:28:44.581721 | orchestrator |
2026-02-02 03:28:44.581735 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-02 03:28:44.581749 | orchestrator | Monday 02 February 2026 03:28:41 +0000 (0:00:02.764) 0:02:55.389 *******
2026-02-02 03:28:44.581762 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:28:44.581776 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:28:44.581839 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:28:44.581848 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:28:44.581856 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:28:44.581864 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:28:44.581872 | orchestrator |
2026-02-02 03:28:44.581880 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-02 03:28:44.581900 | orchestrator | Monday 02 February 2026 03:28:41 +0000 (0:00:00.638) 0:02:56.027 *******
2026-02-02 03:28:44.581908 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:28:44.581916 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:28:44.581923 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:28:44.581931 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:28:44.581939 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:28:44.581947 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:28:44.581955 | orchestrator |
2026-02-02 03:28:44.581963 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-02 03:28:44.581971 | orchestrator | Monday 02 February 2026 03:28:42 +0000 (0:00:00.979) 0:02:57.007 *******
2026-02-02 03:28:44.581978 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:28:44.581986 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:28:44.582002 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:28:44.582010 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:28:44.582080 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:28:44.582088 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:28:44.582096 | orchestrator |
2026-02-02 03:28:44.582104 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-02 03:28:44.582113 | orchestrator | Monday 02 February 2026 03:28:43 +0000 (0:00:00.669) 0:02:57.677 *******
2026-02-02 03:28:44.582121 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-02 03:28:44.582161 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-02 03:28:44.582170 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-02 03:28:44.582178 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:28:44.582186 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:28:44.582194 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:28:44.582202 | orchestrator |
2026-02-02 03:28:44.582210 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-02 03:28:44.582218 | orchestrator | Monday 02 February 2026 03:28:44 +0000 (0:00:00.915) 0:02:58.593 *******
2026-02-02 03:28:44.582242 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-02-02 03:29:01.622892 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-02-02 03:29:01.623019 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:29:01.623040 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-02-02 03:29:01.623054 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-02-02 03:29:01.623062 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-02 03:29:01.623091 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-02 03:29:01.623098 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:29:01.623105 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:29:01.623112 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:29:01.623119 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:29:01.623125 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:29:01.623132 | orchestrator |
2026-02-02 03:29:01.623140 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-02 03:29:01.623149 | orchestrator | Monday 02 February 2026 03:28:45 +0000 (0:00:00.735) 0:02:59.329 *******
2026-02-02 03:29:01.623155 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:29:01.623162 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:29:01.623169 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:29:01.623176 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:29:01.623182 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:29:01.623189 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:29:01.623196 | orchestrator |
2026-02-02 03:29:01.623203 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-02 03:29:01.623209 | orchestrator | Monday 02 February 2026 03:28:46 +0000 (0:00:00.975) 0:03:00.305 *******
2026-02-02 03:29:01.623216 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:29:01.623223 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:29:01.623230 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:29:01.623236 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:29:01.623243 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:29:01.623323 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:29:01.623330 | orchestrator |
2026-02-02 03:29:01.623338 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-02 03:29:01.623347 | orchestrator | Monday 02 February 2026 03:28:46 +0000 (0:00:00.629) 0:03:00.935 *******
2026-02-02 03:29:01.623368 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:29:01.623375 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:29:01.623382 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:29:01.623388 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:29:01.623395 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:29:01.623403 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:29:01.623411 | orchestrator |
2026-02-02 03:29:01.623420 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-02 03:29:01.623428 | orchestrator | Monday 02 February 2026 03:28:47 +0000 (0:00:01.005) 0:03:01.940 *******
2026-02-02 03:29:01.623436 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:29:01.623444 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:29:01.623452 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:29:01.623459 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:29:01.623467 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:29:01.623475 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:29:01.623483 | orchestrator |
2026-02-02 03:29:01.623491 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-02 03:29:01.623499 | orchestrator | Monday 02 February 2026 03:28:48 +0000 (0:00:00.673) 0:03:02.614 *******
2026-02-02 03:29:01.623507 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:29:01.623514 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:29:01.623522 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:29:01.623530 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:29:01.623538 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:29:01.623546 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:29:01.623562 | orchestrator |
2026-02-02 03:29:01.623570 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-02 03:29:01.623578 | orchestrator | Monday 02 February 2026 03:28:49 +0000 (0:00:01.008) 0:03:03.622 *******
2026-02-02 03:29:01.623586 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:29:01.623595 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:29:01.623603 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:29:01.623626 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:29:01.623635 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:29:01.623643 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:29:01.623650 | orchestrator |
2026-02-02 03:29:01.623658 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-02 03:29:01.623666 | orchestrator | Monday 02 February 2026 03:28:50 +0000 (0:00:00.948) 0:03:04.571 *******
2026-02-02 03:29:01.623674 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 03:29:01.623682 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 03:29:01.623690 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 03:29:01.623698 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:29:01.623706 | orchestrator |
2026-02-02 03:29:01.623714 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-02 03:29:01.623723 | orchestrator | Monday 02 February 2026 03:28:50 +0000 (0:00:00.438) 0:03:05.009 *******
2026-02-02 03:29:01.623730 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 03:29:01.623743 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 03:29:01.623754 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 03:29:01.623772 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:29:01.623806 | orchestrator |
2026-02-02 03:29:01.623817 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-02 03:29:01.623827 | orchestrator | Monday 02 February 2026 03:28:51 +0000 (0:00:00.439) 0:03:05.448 *******
2026-02-02 03:29:01.623837 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 03:29:01.623848 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 03:29:01.623858 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 03:29:01.623868 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:29:01.623878 | orchestrator |
2026-02-02 03:29:01.623889 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-02 03:29:01.623899 | orchestrator | Monday 02 February 2026 03:28:51 +0000 (0:00:00.459) 0:03:05.907 *******
2026-02-02 03:29:01.623911 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:29:01.623922 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:29:01.623934 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:29:01.623944 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:29:01.623955 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:29:01.623963 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:29:01.623969 | orchestrator |
2026-02-02 03:29:01.623976 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-02 03:29:01.623983 | orchestrator | Monday 02 February 2026 03:28:52 +0000 (0:00:00.701) 0:03:06.609 *******
2026-02-02 03:29:01.623990 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-02 03:29:01.623997 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-02 03:29:01.624004 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-02 03:29:01.624011 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-02 03:29:01.624018 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:29:01.624024 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-02 03:29:01.624031 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:29:01.624038 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-02-02 03:29:01.624045 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:29:01.624051 | orchestrator |
2026-02-02 03:29:01.624058 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-02 03:29:01.624072 | orchestrator | Monday 02 February 2026 03:28:54 +0000 (0:00:01.809) 0:03:08.418 *******
2026-02-02 03:29:01.624079 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:29:01.624086 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:29:01.624092 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:29:01.624099 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:29:01.624105 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:29:01.624112 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:29:01.624118 | orchestrator |
2026-02-02 03:29:01.624125 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-02 03:29:01.624132 | orchestrator | Monday 02 February 2026 03:28:56 +0000 (0:00:02.652) 0:03:11.071 *******
2026-02-02 03:29:01.624139 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:29:01.624151 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:29:01.624158 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:29:01.624164 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:29:01.624171 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:29:01.624177 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:29:01.624184 | orchestrator |
2026-02-02 03:29:01.624191 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-02 03:29:01.624198 | orchestrator | Monday 02 February 2026 03:28:57 +0000 (0:00:01.011) 0:03:12.082 *******
2026-02-02 03:29:01.624205 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:29:01.624211 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:29:01.624218 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:29:01.624225 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 03:29:01.624233 | orchestrator |
2026-02-02 03:29:01.624239 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-02 03:29:01.624246 | orchestrator | Monday 02 February 2026 03:28:59 +0000 (0:00:01.153) 0:03:13.236 *******
2026-02-02 03:29:01.624253 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:29:01.624259 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:29:01.624266 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:29:01.624273 | orchestrator |
2026-02-02 03:29:01.624279 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-02 03:29:01.624286 | orchestrator | Monday 02 February 2026 03:28:59 +0000 (0:00:00.352) 0:03:13.589 *******
2026-02-02 03:29:01.624292 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:29:01.624299 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:29:01.624306 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:29:01.624312 | orchestrator |
2026-02-02 03:29:01.624319 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-02 03:29:01.624326 | orchestrator | Monday 02 February 2026 03:29:00 +0000 (0:00:01.490) 0:03:15.079 *******
2026-02-02 03:29:01.624339 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 03:29:17.909041 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-02 03:29:17.909161 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-02 03:29:17.909178 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:29:17.909192 | orchestrator |
2026-02-02 03:29:17.909206 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-02 03:29:17.909220 | orchestrator | Monday 02 February 2026 03:29:01 +0000 (0:00:00.672) 0:03:15.752 *******
2026-02-02 03:29:17.909232 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:29:17.909246 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:29:17.909259 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:29:17.909271 | orchestrator |
2026-02-02 03:29:17.909283 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-02 03:29:17.909296 | orchestrator | Monday 02 February 2026 03:29:01 +0000 (0:00:00.377) 0:03:16.130 *******
2026-02-02 03:29:17.909309 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:29:17.909322 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:29:17.909334 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:29:17.909371 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 03:29:17.909385 | orchestrator |
2026-02-02 03:29:17.909397 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-02-02 03:29:17.909410 | orchestrator | Monday 02 February 2026 03:29:03 +0000 (0:00:01.150) 0:03:17.280 *******
2026-02-02 03:29:17.909422 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 03:29:17.909435 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 03:29:17.909444 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 03:29:17.909451 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:29:17.909458 | orchestrator |
2026-02-02 03:29:17.909465 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-02 03:29:17.909473 | orchestrator | Monday 02 February 2026 03:29:03 +0000 (0:00:00.429) 0:03:17.709 *******
2026-02-02 03:29:17.909480 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:29:17.909487 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:29:17.909494 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:29:17.909502 | orchestrator |
2026-02-02 03:29:17.909509 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-02 03:29:17.909516 | orchestrator | Monday 02 February 2026 03:29:03 +0000 (0:00:00.363) 0:03:18.073 *******
2026-02-02 03:29:17.909523 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:29:17.909531 | orchestrator |
2026-02-02 03:29:17.909538 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-02 03:29:17.909545 | orchestrator | Monday 02 February 2026 03:29:04 +0000 (0:00:00.276) 0:03:18.349 *******
2026-02-02 03:29:17.909553 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:29:17.909560 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:29:17.909567 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:29:17.909576 | orchestrator |
2026-02-02 03:29:17.909584 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-02 03:29:17.909593 | orchestrator | Monday 02 February 2026 03:29:04 +0000 (0:00:00.331) 0:03:18.681 *******
2026-02-02 03:29:17.909601 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:29:17.909609 | orchestrator |
2026-02-02 03:29:17.909618 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-02 03:29:17.909626 | orchestrator | Monday 02 February 2026 03:29:05 +0000 (0:00:00.732) 0:03:19.413 *******
2026-02-02 03:29:17.909635 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:29:17.909643 | orchestrator |
2026-02-02 03:29:17.909651 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-02 03:29:17.909660 | orchestrator | Monday 02 February 2026 03:29:05 +0000 (0:00:00.306) 0:03:19.720 *******
2026-02-02 03:29:17.909668 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:29:17.909676 | orchestrator |
2026-02-02 03:29:17.909685 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-02 03:29:17.909693 | orchestrator | Monday 02 February 2026 03:29:05 +0000 (0:00:00.133) 0:03:19.853 *******
2026-02-02 03:29:17.909715 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:29:17.909724 | orchestrator |
2026-02-02 03:29:17.909732 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-02 03:29:17.909741 | orchestrator | Monday 02 February 2026 03:29:05 +0000 (0:00:00.226) 0:03:20.080 *******
2026-02-02 03:29:17.909749 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:29:17.909758 | orchestrator |
2026-02-02 03:29:17.909837 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-02 03:29:17.909847 | orchestrator | Monday 02 February 2026 03:29:06 +0000 (0:00:00.245) 0:03:20.326 *******
2026-02-02 03:29:17.909855 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 03:29:17.909863 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 03:29:17.909872 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 03:29:17.909888 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:29:17.909896 | orchestrator |
2026-02-02 03:29:17.909904 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-02 03:29:17.909911 | orchestrator | Monday 02 February 2026 03:29:06 +0000 (0:00:00.450) 0:03:20.777 *******
2026-02-02 03:29:17.909918 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:29:17.909926 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:29:17.909933 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:29:17.909940 | orchestrator |
2026-02-02 03:29:17.909947 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-02 03:29:17.909955 | orchestrator | Monday 02 February 2026 03:29:06 +0000 (0:00:00.352) 0:03:21.129 *******
2026-02-02 03:29:17.909962 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:29:17.909969 | orchestrator |
2026-02-02 03:29:17.909976 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-02 03:29:17.909984 | orchestrator | Monday 02 February 2026 03:29:07 +0000 (0:00:00.229) 0:03:21.359 *******
2026-02-02 03:29:17.909991 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:29:17.909998 | orchestrator |
2026-02-02 03:29:17.910068 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-02 03:29:17.910078 | orchestrator | Monday 02 February 2026 03:29:07 +0000 (0:00:00.255) 0:03:21.614 *******
2026-02-02 03:29:17.910085 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:29:17.910092 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:29:17.910100 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:29:17.910107 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 03:29:17.910114 | orchestrator |
2026-02-02 03:29:17.910122 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-02-02 03:29:17.910129 | orchestrator | Monday 02 February 2026 03:29:08 +0000 (0:00:01.207) 0:03:22.822 *******
2026-02-02 03:29:17.910136 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:29:17.910143 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:29:17.910150 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:29:17.910158 | orchestrator |
2026-02-02 03:29:17.910165 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-02-02 03:29:17.910172 | orchestrator | Monday 02 February 2026 03:29:09 +0000 (0:00:00.355) 0:03:23.178 *******
2026-02-02 03:29:17.910179 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:29:17.910187 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:29:17.910194 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:29:17.910201 | orchestrator |
2026-02-02 03:29:17.910208 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-02-02 03:29:17.910233 | orchestrator | Monday 02 February 2026 03:29:10 +0000 (0:00:01.530) 0:03:24.708 *******
2026-02-02 03:29:17.910241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 03:29:17.910248 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 03:29:17.910256 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 03:29:17.910263 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:29:17.910270 | orchestrator |
2026-02-02 03:29:17.910277 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-02-02 03:29:17.910284 | orchestrator | Monday 02 February 2026 03:29:11 +0000 (0:00:00.665) 0:03:25.374 *******
2026-02-02 03:29:17.910292 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:29:17.910299 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:29:17.910306 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:29:17.910313 | orchestrator |
2026-02-02 03:29:17.910320 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-02 03:29:17.910327 | orchestrator | Monday 02 February 2026 03:29:11 +0000 (0:00:00.367) 0:03:25.741 *******
2026-02-02 03:29:17.910335 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:29:17.910342 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:29:17.910349 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:29:17.910362 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 03:29:17.910369 | orchestrator |
2026-02-02 03:29:17.910377 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-02-02 03:29:17.910384 | orchestrator | Monday 02 February 2026 03:29:12 +0000 (0:00:01.126) 0:03:26.868 *******
2026-02-02 03:29:17.910391 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:29:17.910399 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:29:17.910406 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:29:17.910413 | orchestrator |
2026-02-02 03:29:17.910421 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-02-02 03:29:17.910428 | orchestrator | Monday 02 February 2026 03:29:13 +0000 (0:00:00.393) 0:03:27.261 *******
2026-02-02 03:29:17.910435 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:29:17.910443 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:29:17.910450 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:29:17.910457 | orchestrator |
2026-02-02 03:29:17.910465 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-02-02 03:29:17.910472 | orchestrator | Monday 02 February 2026 03:29:14 +0000 (0:00:01.185) 0:03:28.447 *******
2026-02-02 03:29:17.910479 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 03:29:17.910486 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 03:29:17.910498 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 03:29:17.910506 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:29:17.910513 | orchestrator |
2026-02-02 03:29:17.910520 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-02-02 03:29:17.910528 | orchestrator | Monday 02 February 2026 03:29:15 +0000 (0:00:00.952) 0:03:29.399 *******
2026-02-02 03:29:17.910535 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:29:17.910542 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:29:17.910549 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:29:17.910557 | orchestrator |
2026-02-02 03:29:17.910564 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-02-02 03:29:17.910571 | orchestrator | Monday 02 February 2026 03:29:15 +0000 (0:00:00.594) 0:03:29.994 *******
2026-02-02 03:29:17.910578 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:29:17.910586 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:29:17.910593 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:29:17.910600 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:29:17.910607 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:29:17.910614 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:29:17.910621 | orchestrator |
2026-02-02 03:29:17.910628 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-02 03:29:17.910636 | orchestrator | Monday 02 February 2026 03:29:16 +0000 (0:00:00.682) 0:03:30.677 *******
2026-02-02
03:29:17.910643 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:29:17.910650 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:29:17.910657 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:29:17.910665 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:29:17.910672 | orchestrator | 2026-02-02 03:29:17.910679 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-02-02 03:29:17.910687 | orchestrator | Monday 02 February 2026 03:29:17 +0000 (0:00:01.148) 0:03:31.825 ******* 2026-02-02 03:29:17.910700 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:29:35.467208 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:29:35.467307 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:29:35.467320 | orchestrator | 2026-02-02 03:29:35.467331 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-02-02 03:29:35.467342 | orchestrator | Monday 02 February 2026 03:29:18 +0000 (0:00:00.381) 0:03:32.206 ******* 2026-02-02 03:29:35.467351 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:29:35.467386 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:29:35.467397 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:29:35.467406 | orchestrator | 2026-02-02 03:29:35.467416 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-02-02 03:29:35.467426 | orchestrator | Monday 02 February 2026 03:29:19 +0000 (0:00:01.192) 0:03:33.399 ******* 2026-02-02 03:29:35.467436 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-02 03:29:35.467448 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-02 03:29:35.467457 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-02 03:29:35.467466 | orchestrator | skipping: [testbed-node-0] 2026-02-02 
03:29:35.467475 | orchestrator | 2026-02-02 03:29:35.467484 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-02-02 03:29:35.467493 | orchestrator | Monday 02 February 2026 03:29:20 +0000 (0:00:00.958) 0:03:34.357 ******* 2026-02-02 03:29:35.467502 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:29:35.467512 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:29:35.467521 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:29:35.467531 | orchestrator | 2026-02-02 03:29:35.467540 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-02-02 03:29:35.467550 | orchestrator | 2026-02-02 03:29:35.467559 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-02 03:29:35.467569 | orchestrator | Monday 02 February 2026 03:29:21 +0000 (0:00:00.912) 0:03:35.270 ******* 2026-02-02 03:29:35.467580 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:29:35.467591 | orchestrator | 2026-02-02 03:29:35.467601 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-02 03:29:35.467611 | orchestrator | Monday 02 February 2026 03:29:21 +0000 (0:00:00.552) 0:03:35.822 ******* 2026-02-02 03:29:35.467621 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:29:35.467630 | orchestrator | 2026-02-02 03:29:35.467641 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-02 03:29:35.467647 | orchestrator | Monday 02 February 2026 03:29:22 +0000 (0:00:00.855) 0:03:36.677 ******* 2026-02-02 03:29:35.467653 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:29:35.467659 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:29:35.467665 | 
orchestrator | ok: [testbed-node-2] 2026-02-02 03:29:35.467671 | orchestrator | 2026-02-02 03:29:35.467677 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-02 03:29:35.467683 | orchestrator | Monday 02 February 2026 03:29:23 +0000 (0:00:00.729) 0:03:37.406 ******* 2026-02-02 03:29:35.467688 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:29:35.467694 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:29:35.467700 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:29:35.467706 | orchestrator | 2026-02-02 03:29:35.467712 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-02 03:29:35.467718 | orchestrator | Monday 02 February 2026 03:29:23 +0000 (0:00:00.593) 0:03:38.000 ******* 2026-02-02 03:29:35.467723 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:29:35.467729 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:29:35.467735 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:29:35.467740 | orchestrator | 2026-02-02 03:29:35.467746 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-02 03:29:35.467813 | orchestrator | Monday 02 February 2026 03:29:24 +0000 (0:00:00.328) 0:03:38.328 ******* 2026-02-02 03:29:35.467821 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:29:35.467828 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:29:35.467847 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:29:35.467854 | orchestrator | 2026-02-02 03:29:35.467861 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-02 03:29:35.467868 | orchestrator | Monday 02 February 2026 03:29:24 +0000 (0:00:00.350) 0:03:38.678 ******* 2026-02-02 03:29:35.467882 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:29:35.467889 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:29:35.467896 | orchestrator | ok: 
[testbed-node-2] 2026-02-02 03:29:35.467903 | orchestrator | 2026-02-02 03:29:35.467910 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-02 03:29:35.467917 | orchestrator | Monday 02 February 2026 03:29:25 +0000 (0:00:00.784) 0:03:39.462 ******* 2026-02-02 03:29:35.467924 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:29:35.467930 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:29:35.467937 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:29:35.467944 | orchestrator | 2026-02-02 03:29:35.467951 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-02 03:29:35.467957 | orchestrator | Monday 02 February 2026 03:29:25 +0000 (0:00:00.667) 0:03:40.129 ******* 2026-02-02 03:29:35.467964 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:29:35.467971 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:29:35.467978 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:29:35.467985 | orchestrator | 2026-02-02 03:29:35.467992 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-02 03:29:35.467998 | orchestrator | Monday 02 February 2026 03:29:26 +0000 (0:00:00.338) 0:03:40.468 ******* 2026-02-02 03:29:35.468005 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:29:35.468012 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:29:35.468019 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:29:35.468025 | orchestrator | 2026-02-02 03:29:35.468032 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-02 03:29:35.468038 | orchestrator | Monday 02 February 2026 03:29:27 +0000 (0:00:00.726) 0:03:41.194 ******* 2026-02-02 03:29:35.468045 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:29:35.468052 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:29:35.468058 | orchestrator | ok: [testbed-node-2] 2026-02-02 
03:29:35.468065 | orchestrator | 2026-02-02 03:29:35.468091 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-02 03:29:35.468098 | orchestrator | Monday 02 February 2026 03:29:27 +0000 (0:00:00.719) 0:03:41.913 ******* 2026-02-02 03:29:35.468105 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:29:35.468112 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:29:35.468119 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:29:35.468126 | orchestrator | 2026-02-02 03:29:35.468132 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-02 03:29:35.468138 | orchestrator | Monday 02 February 2026 03:29:28 +0000 (0:00:00.590) 0:03:42.503 ******* 2026-02-02 03:29:35.468143 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:29:35.468149 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:29:35.468155 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:29:35.468161 | orchestrator | 2026-02-02 03:29:35.468167 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-02 03:29:35.468173 | orchestrator | Monday 02 February 2026 03:29:28 +0000 (0:00:00.344) 0:03:42.848 ******* 2026-02-02 03:29:35.468178 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:29:35.468184 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:29:35.468190 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:29:35.468196 | orchestrator | 2026-02-02 03:29:35.468202 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-02 03:29:35.468207 | orchestrator | Monday 02 February 2026 03:29:29 +0000 (0:00:00.334) 0:03:43.183 ******* 2026-02-02 03:29:35.468213 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:29:35.468219 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:29:35.468225 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:29:35.468231 | 
orchestrator | 2026-02-02 03:29:35.468237 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-02 03:29:35.468242 | orchestrator | Monday 02 February 2026 03:29:29 +0000 (0:00:00.351) 0:03:43.534 ******* 2026-02-02 03:29:35.468248 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:29:35.468258 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:29:35.468264 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:29:35.468270 | orchestrator | 2026-02-02 03:29:35.468276 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-02 03:29:35.468281 | orchestrator | Monday 02 February 2026 03:29:29 +0000 (0:00:00.583) 0:03:44.118 ******* 2026-02-02 03:29:35.468287 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:29:35.468293 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:29:35.468299 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:29:35.468304 | orchestrator | 2026-02-02 03:29:35.468310 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-02 03:29:35.468316 | orchestrator | Monday 02 February 2026 03:29:30 +0000 (0:00:00.351) 0:03:44.470 ******* 2026-02-02 03:29:35.468322 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:29:35.468327 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:29:35.468333 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:29:35.468339 | orchestrator | 2026-02-02 03:29:35.468345 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-02 03:29:35.468351 | orchestrator | Monday 02 February 2026 03:29:30 +0000 (0:00:00.358) 0:03:44.828 ******* 2026-02-02 03:29:35.468356 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:29:35.468362 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:29:35.468368 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:29:35.468374 | orchestrator | 
2026-02-02 03:29:35.468380 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-02 03:29:35.468385 | orchestrator | Monday 02 February 2026 03:29:31 +0000 (0:00:00.389) 0:03:45.217 ******* 2026-02-02 03:29:35.468391 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:29:35.468397 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:29:35.468402 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:29:35.468408 | orchestrator | 2026-02-02 03:29:35.468414 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-02 03:29:35.468420 | orchestrator | Monday 02 February 2026 03:29:31 +0000 (0:00:00.358) 0:03:45.576 ******* 2026-02-02 03:29:35.468426 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:29:35.468431 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:29:35.468437 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:29:35.468443 | orchestrator | 2026-02-02 03:29:35.468453 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-02 03:29:35.468459 | orchestrator | Monday 02 February 2026 03:29:32 +0000 (0:00:00.928) 0:03:46.505 ******* 2026-02-02 03:29:35.468464 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:29:35.468470 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:29:35.468476 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:29:35.468482 | orchestrator | 2026-02-02 03:29:35.468488 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-02 03:29:35.468494 | orchestrator | Monday 02 February 2026 03:29:32 +0000 (0:00:00.339) 0:03:46.844 ******* 2026-02-02 03:29:35.468500 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:29:35.468507 | orchestrator | 2026-02-02 03:29:35.468516 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] 
************** 2026-02-02 03:29:35.468526 | orchestrator | Monday 02 February 2026 03:29:33 +0000 (0:00:00.881) 0:03:47.726 ******* 2026-02-02 03:29:35.468535 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:29:35.468546 | orchestrator | 2026-02-02 03:29:35.468556 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-02 03:29:35.468565 | orchestrator | Monday 02 February 2026 03:29:33 +0000 (0:00:00.163) 0:03:47.889 ******* 2026-02-02 03:29:35.468575 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-02 03:29:35.468583 | orchestrator | 2026-02-02 03:29:35.468589 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-02 03:29:35.468595 | orchestrator | Monday 02 February 2026 03:29:34 +0000 (0:00:01.120) 0:03:49.010 ******* 2026-02-02 03:29:35.468605 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:29:35.468611 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:29:35.468617 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:29:35.468623 | orchestrator | 2026-02-02 03:29:35.468628 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-02 03:29:35.468634 | orchestrator | Monday 02 February 2026 03:29:35 +0000 (0:00:00.374) 0:03:49.384 ******* 2026-02-02 03:29:35.468644 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:30:46.082243 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:30:46.082343 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:30:46.082352 | orchestrator | 2026-02-02 03:30:46.082359 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-02 03:30:46.082367 | orchestrator | Monday 02 February 2026 03:29:35 +0000 (0:00:00.388) 0:03:49.772 ******* 2026-02-02 03:30:46.082373 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:30:46.082381 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:30:46.082387 | 
orchestrator | changed: [testbed-node-2] 2026-02-02 03:30:46.082393 | orchestrator | 2026-02-02 03:30:46.082399 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-02 03:30:46.082405 | orchestrator | Monday 02 February 2026 03:29:37 +0000 (0:00:01.529) 0:03:51.302 ******* 2026-02-02 03:30:46.082411 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:30:46.082417 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:30:46.082423 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:30:46.082429 | orchestrator | 2026-02-02 03:30:46.082435 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-02 03:30:46.082441 | orchestrator | Monday 02 February 2026 03:29:37 +0000 (0:00:00.777) 0:03:52.080 ******* 2026-02-02 03:30:46.082448 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:30:46.082464 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:30:46.082482 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:30:46.082489 | orchestrator | 2026-02-02 03:30:46.082495 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-02 03:30:46.082502 | orchestrator | Monday 02 February 2026 03:29:38 +0000 (0:00:00.671) 0:03:52.751 ******* 2026-02-02 03:30:46.082508 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:30:46.082515 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:30:46.082521 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:30:46.082528 | orchestrator | 2026-02-02 03:30:46.082533 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-02 03:30:46.082539 | orchestrator | Monday 02 February 2026 03:29:39 +0000 (0:00:00.703) 0:03:53.455 ******* 2026-02-02 03:30:46.082546 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:30:46.082552 | orchestrator | 2026-02-02 03:30:46.082558 | orchestrator | TASK [ceph-mon : Slurp admin keyring] 
****************************************** 2026-02-02 03:30:46.082564 | orchestrator | Monday 02 February 2026 03:29:41 +0000 (0:00:01.860) 0:03:55.315 ******* 2026-02-02 03:30:46.082570 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:30:46.082575 | orchestrator | 2026-02-02 03:30:46.082581 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-02 03:30:46.082587 | orchestrator | Monday 02 February 2026 03:29:41 +0000 (0:00:00.753) 0:03:56.069 ******* 2026-02-02 03:30:46.082593 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-02 03:30:46.082599 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 03:30:46.082605 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 03:30:46.082613 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-02 03:30:46.082620 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-02-02 03:30:46.082628 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-02 03:30:46.082634 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-02 03:30:46.082641 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-02-02 03:30:46.082647 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-02 03:30:46.082677 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-02-02 03:30:46.082684 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-02 03:30:46.082690 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-02-02 03:30:46.082696 | orchestrator | 2026-02-02 03:30:46.082703 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-02 03:30:46.082709 | orchestrator | Monday 02 February 2026 03:29:45 +0000 (0:00:03.079) 0:03:59.149 ******* 2026-02-02 03:30:46.082715 
| orchestrator | changed: [testbed-node-0] 2026-02-02 03:30:46.082722 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:30:46.082741 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:30:46.082747 | orchestrator | 2026-02-02 03:30:46.082753 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-02 03:30:46.082759 | orchestrator | Monday 02 February 2026 03:29:46 +0000 (0:00:01.190) 0:04:00.340 ******* 2026-02-02 03:30:46.082766 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:30:46.082772 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:30:46.082778 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:30:46.082784 | orchestrator | 2026-02-02 03:30:46.082815 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-02 03:30:46.082825 | orchestrator | Monday 02 February 2026 03:29:46 +0000 (0:00:00.334) 0:04:00.675 ******* 2026-02-02 03:30:46.082834 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:30:46.082842 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:30:46.082851 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:30:46.082859 | orchestrator | 2026-02-02 03:30:46.082868 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-02 03:30:46.082877 | orchestrator | Monday 02 February 2026 03:29:47 +0000 (0:00:00.657) 0:04:01.332 ******* 2026-02-02 03:30:46.082886 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:30:46.082895 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:30:46.082903 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:30:46.082912 | orchestrator | 2026-02-02 03:30:46.082921 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-02 03:30:46.082929 | orchestrator | Monday 02 February 2026 03:29:48 +0000 (0:00:01.536) 0:04:02.868 ******* 2026-02-02 03:30:46.082937 | orchestrator | changed: [testbed-node-0] 
2026-02-02 03:30:46.082945 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:30:46.082954 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:30:46.082964 | orchestrator | 2026-02-02 03:30:46.082972 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-02 03:30:46.082980 | orchestrator | Monday 02 February 2026 03:29:49 +0000 (0:00:01.200) 0:04:04.068 ******* 2026-02-02 03:30:46.082988 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:30:46.082997 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:30:46.083021 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:30:46.083030 | orchestrator | 2026-02-02 03:30:46.083038 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-02 03:30:46.083047 | orchestrator | Monday 02 February 2026 03:29:50 +0000 (0:00:00.352) 0:04:04.421 ******* 2026-02-02 03:30:46.083057 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:30:46.083066 | orchestrator | 2026-02-02 03:30:46.083074 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-02 03:30:46.083083 | orchestrator | Monday 02 February 2026 03:29:51 +0000 (0:00:00.893) 0:04:05.314 ******* 2026-02-02 03:30:46.083091 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:30:46.083100 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:30:46.083109 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:30:46.083117 | orchestrator | 2026-02-02 03:30:46.083126 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-02 03:30:46.083135 | orchestrator | Monday 02 February 2026 03:29:51 +0000 (0:00:00.334) 0:04:05.649 ******* 2026-02-02 03:30:46.083144 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:30:46.083162 | orchestrator | skipping: 
[testbed-node-1] 2026-02-02 03:30:46.083168 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:30:46.083174 | orchestrator | 2026-02-02 03:30:46.083180 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-02 03:30:46.083187 | orchestrator | Monday 02 February 2026 03:29:51 +0000 (0:00:00.346) 0:04:05.996 ******* 2026-02-02 03:30:46.083193 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:30:46.083200 | orchestrator | 2026-02-02 03:30:46.083207 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-02 03:30:46.083213 | orchestrator | Monday 02 February 2026 03:29:52 +0000 (0:00:00.872) 0:04:06.868 ******* 2026-02-02 03:30:46.083219 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:30:46.083225 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:30:46.083231 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:30:46.083237 | orchestrator | 2026-02-02 03:30:46.083243 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-02 03:30:46.083249 | orchestrator | Monday 02 February 2026 03:29:54 +0000 (0:00:01.508) 0:04:08.377 ******* 2026-02-02 03:30:46.083255 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:30:46.083262 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:30:46.083268 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:30:46.083274 | orchestrator | 2026-02-02 03:30:46.083280 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-02 03:30:46.083286 | orchestrator | Monday 02 February 2026 03:29:55 +0000 (0:00:01.127) 0:04:09.505 ******* 2026-02-02 03:30:46.083297 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:30:46.083309 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:30:46.083314 | orchestrator | changed: 
[testbed-node-2] 2026-02-02 03:30:46.083320 | orchestrator | 2026-02-02 03:30:46.083326 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-02 03:30:46.083339 | orchestrator | Monday 02 February 2026 03:29:57 +0000 (0:00:02.019) 0:04:11.524 ******* 2026-02-02 03:30:46.083355 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:30:46.083372 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:30:46.083386 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:30:46.083401 | orchestrator | 2026-02-02 03:30:46.083414 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-02 03:30:46.083429 | orchestrator | Monday 02 February 2026 03:29:59 +0000 (0:00:02.031) 0:04:13.556 ******* 2026-02-02 03:30:46.083443 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:30:46.083457 | orchestrator | 2026-02-02 03:30:46.083470 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-02 03:30:46.083484 | orchestrator | Monday 02 February 2026 03:29:59 +0000 (0:00:00.569) 0:04:14.125 ******* 2026-02-02 03:30:46.083500 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-02-02 03:30:46.083506 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:30:46.083513 | orchestrator | 2026-02-02 03:30:46.083519 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-02 03:30:46.083524 | orchestrator | Monday 02 February 2026 03:30:22 +0000 (0:00:22.136) 0:04:36.262 ******* 2026-02-02 03:30:46.083530 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:30:46.083536 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:30:46.083542 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:30:46.083554 | orchestrator | 2026-02-02 03:30:46.083565 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-02 03:30:46.083571 | orchestrator | Monday 02 February 2026 03:30:31 +0000 (0:00:09.034) 0:04:45.297 ******* 2026-02-02 03:30:46.083577 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:30:46.083584 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:30:46.083590 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:30:46.083604 | orchestrator | 2026-02-02 03:30:46.083610 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-02 03:30:46.083617 | orchestrator | Monday 02 February 2026 03:30:31 +0000 (0:00:00.317) 0:04:45.614 ******* 2026-02-02 03:30:46.083626 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f6d8efd809aa3b959e4837edf24435e678e811c9'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-02 03:30:46.083645 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f6d8efd809aa3b959e4837edf24435e678e811c9'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-02 03:30:58.660470 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f6d8efd809aa3b959e4837edf24435e678e811c9'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-02 03:30:58.660571 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f6d8efd809aa3b959e4837edf24435e678e811c9'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-02 03:30:58.660580 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f6d8efd809aa3b959e4837edf24435e678e811c9'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-02 03:30:58.660588 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f6d8efd809aa3b959e4837edf24435e678e811c9'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__f6d8efd809aa3b959e4837edf24435e678e811c9'}])  2026-02-02 03:30:58.660597 | orchestrator | 2026-02-02 03:30:58.660605 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2026-02-02 03:30:58.660614 | orchestrator | Monday 02 February 2026 03:30:46 +0000 (0:00:14.597) 0:05:00.212 ******* 2026-02-02 03:30:58.660620 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:30:58.660628 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:30:58.660635 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:30:58.660641 | orchestrator | 2026-02-02 03:30:58.660647 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-02 03:30:58.660653 | orchestrator | Monday 02 February 2026 03:30:46 +0000 (0:00:00.403) 0:05:00.616 ******* 2026-02-02 03:30:58.660660 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:30:58.660667 | orchestrator | 2026-02-02 03:30:58.660673 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-02-02 03:30:58.660679 | orchestrator | Monday 02 February 2026 03:30:47 +0000 (0:00:00.591) 0:05:01.208 ******* 2026-02-02 03:30:58.660685 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:30:58.660693 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:30:58.660700 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:30:58.660706 | orchestrator | 2026-02-02 03:30:58.660711 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-02-02 03:30:58.660740 | orchestrator | Monday 02 February 2026 03:30:47 +0000 (0:00:00.646) 0:05:01.854 ******* 2026-02-02 03:30:58.660807 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:30:58.660814 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:30:58.660820 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:30:58.660826 | orchestrator | 2026-02-02 03:30:58.660832 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-02-02 
03:30:58.660838 | orchestrator | Monday 02 February 2026 03:30:48 +0000 (0:00:00.407) 0:05:02.262 ******* 2026-02-02 03:30:58.660844 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-02 03:30:58.660851 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-02 03:30:58.660857 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-02 03:30:58.660862 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:30:58.660928 | orchestrator | 2026-02-02 03:30:58.660935 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-02-02 03:30:58.660941 | orchestrator | Monday 02 February 2026 03:30:48 +0000 (0:00:00.697) 0:05:02.959 ******* 2026-02-02 03:30:58.660947 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:30:58.660953 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:30:58.660959 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:30:58.660965 | orchestrator | 2026-02-02 03:30:58.660971 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-02-02 03:30:58.660977 | orchestrator | 2026-02-02 03:30:58.660984 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-02 03:30:58.660989 | orchestrator | Monday 02 February 2026 03:30:49 +0000 (0:00:00.876) 0:05:03.836 ******* 2026-02-02 03:30:58.660997 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:30:58.661005 | orchestrator | 2026-02-02 03:30:58.661011 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-02 03:30:58.661016 | orchestrator | Monday 02 February 2026 03:30:50 +0000 (0:00:00.559) 0:05:04.395 ******* 2026-02-02 03:30:58.661023 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-02 03:30:58.661029 | orchestrator | 2026-02-02 03:30:58.661036 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-02 03:30:58.661057 | orchestrator | Monday 02 February 2026 03:30:51 +0000 (0:00:00.861) 0:05:05.256 ******* 2026-02-02 03:30:58.661064 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:30:58.661070 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:30:58.661077 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:30:58.661083 | orchestrator | 2026-02-02 03:30:58.661089 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-02 03:30:58.661096 | orchestrator | Monday 02 February 2026 03:30:51 +0000 (0:00:00.771) 0:05:06.028 ******* 2026-02-02 03:30:58.661102 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:30:58.661109 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:30:58.661116 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:30:58.661123 | orchestrator | 2026-02-02 03:30:58.661129 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-02 03:30:58.661136 | orchestrator | Monday 02 February 2026 03:30:52 +0000 (0:00:00.393) 0:05:06.422 ******* 2026-02-02 03:30:58.661142 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:30:58.661149 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:30:58.661156 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:30:58.661162 | orchestrator | 2026-02-02 03:30:58.661168 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-02 03:30:58.661175 | orchestrator | Monday 02 February 2026 03:30:52 +0000 (0:00:00.324) 0:05:06.746 ******* 2026-02-02 03:30:58.661181 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:30:58.661188 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:30:58.661204 | orchestrator | skipping: 
[testbed-node-2] 2026-02-02 03:30:58.661210 | orchestrator | 2026-02-02 03:30:58.661217 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-02 03:30:58.661223 | orchestrator | Monday 02 February 2026 03:30:53 +0000 (0:00:00.594) 0:05:07.340 ******* 2026-02-02 03:30:58.661229 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:30:58.661236 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:30:58.661243 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:30:58.661249 | orchestrator | 2026-02-02 03:30:58.661255 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-02 03:30:58.661262 | orchestrator | Monday 02 February 2026 03:30:53 +0000 (0:00:00.719) 0:05:08.060 ******* 2026-02-02 03:30:58.661269 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:30:58.661275 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:30:58.661282 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:30:58.661289 | orchestrator | 2026-02-02 03:30:58.661295 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-02 03:30:58.661302 | orchestrator | Monday 02 February 2026 03:30:54 +0000 (0:00:00.339) 0:05:08.399 ******* 2026-02-02 03:30:58.661308 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:30:58.661315 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:30:58.661322 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:30:58.661328 | orchestrator | 2026-02-02 03:30:58.661335 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-02 03:30:58.661341 | orchestrator | Monday 02 February 2026 03:30:54 +0000 (0:00:00.330) 0:05:08.729 ******* 2026-02-02 03:30:58.661348 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:30:58.661354 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:30:58.661361 | orchestrator | ok: [testbed-node-2] 2026-02-02 
03:30:58.661368 | orchestrator | 2026-02-02 03:30:58.661375 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-02 03:30:58.661382 | orchestrator | Monday 02 February 2026 03:30:55 +0000 (0:00:01.032) 0:05:09.761 ******* 2026-02-02 03:30:58.661388 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:30:58.661394 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:30:58.661401 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:30:58.661406 | orchestrator | 2026-02-02 03:30:58.661412 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-02 03:30:58.661419 | orchestrator | Monday 02 February 2026 03:30:56 +0000 (0:00:00.767) 0:05:10.529 ******* 2026-02-02 03:30:58.661425 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:30:58.661431 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:30:58.661441 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:30:58.661448 | orchestrator | 2026-02-02 03:30:58.661455 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-02 03:30:58.661461 | orchestrator | Monday 02 February 2026 03:30:56 +0000 (0:00:00.339) 0:05:10.869 ******* 2026-02-02 03:30:58.661467 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:30:58.661473 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:30:58.661479 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:30:58.661485 | orchestrator | 2026-02-02 03:30:58.661491 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-02 03:30:58.661497 | orchestrator | Monday 02 February 2026 03:30:57 +0000 (0:00:00.343) 0:05:11.213 ******* 2026-02-02 03:30:58.661503 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:30:58.661509 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:30:58.661515 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:30:58.661521 | orchestrator | 
2026-02-02 03:30:58.661528 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-02 03:30:58.661535 | orchestrator | Monday 02 February 2026 03:30:57 +0000 (0:00:00.595) 0:05:11.808 ******* 2026-02-02 03:30:58.661541 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:30:58.661547 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:30:58.661554 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:30:58.661559 | orchestrator | 2026-02-02 03:30:58.661570 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-02 03:30:58.661576 | orchestrator | Monday 02 February 2026 03:30:57 +0000 (0:00:00.333) 0:05:12.141 ******* 2026-02-02 03:30:58.661582 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:30:58.661588 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:30:58.661594 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:30:58.661600 | orchestrator | 2026-02-02 03:30:58.661605 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-02 03:30:58.661611 | orchestrator | Monday 02 February 2026 03:30:58 +0000 (0:00:00.319) 0:05:12.461 ******* 2026-02-02 03:30:58.661617 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:30:58.661622 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:30:58.661628 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:30:58.661634 | orchestrator | 2026-02-02 03:30:58.661640 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-02 03:30:58.661651 | orchestrator | Monday 02 February 2026 03:30:58 +0000 (0:00:00.328) 0:05:12.789 ******* 2026-02-02 03:31:53.956127 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:31:53.956285 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:31:53.956303 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:31:53.956316 | orchestrator | 
2026-02-02 03:31:53.956330 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-02 03:31:53.956343 | orchestrator | Monday 02 February 2026 03:30:59 +0000 (0:00:00.628) 0:05:13.418 ******* 2026-02-02 03:31:53.956355 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:31:53.956368 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:31:53.956381 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:31:53.956392 | orchestrator | 2026-02-02 03:31:53.956404 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-02 03:31:53.956416 | orchestrator | Monday 02 February 2026 03:30:59 +0000 (0:00:00.365) 0:05:13.783 ******* 2026-02-02 03:31:53.956428 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:31:53.956440 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:31:53.956452 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:31:53.956464 | orchestrator | 2026-02-02 03:31:53.956476 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-02 03:31:53.956488 | orchestrator | Monday 02 February 2026 03:30:59 +0000 (0:00:00.353) 0:05:14.137 ******* 2026-02-02 03:31:53.956498 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:31:53.956509 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:31:53.956520 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:31:53.956530 | orchestrator | 2026-02-02 03:31:53.956541 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-02 03:31:53.956552 | orchestrator | Monday 02 February 2026 03:31:00 +0000 (0:00:00.902) 0:05:15.039 ******* 2026-02-02 03:31:53.956563 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-02 03:31:53.956573 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 03:31:53.956584 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-02-02 03:31:53.956594 | orchestrator | 2026-02-02 03:31:53.956605 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-02 03:31:53.956616 | orchestrator | Monday 02 February 2026 03:31:01 +0000 (0:00:00.725) 0:05:15.765 ******* 2026-02-02 03:31:53.956627 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:31:53.956641 | orchestrator | 2026-02-02 03:31:53.956652 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-02-02 03:31:53.956664 | orchestrator | Monday 02 February 2026 03:31:02 +0000 (0:00:00.626) 0:05:16.391 ******* 2026-02-02 03:31:53.956677 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:31:53.956690 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:31:53.956703 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:31:53.956716 | orchestrator | 2026-02-02 03:31:53.956729 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-02-02 03:31:53.956770 | orchestrator | Monday 02 February 2026 03:31:02 +0000 (0:00:00.668) 0:05:17.059 ******* 2026-02-02 03:31:53.956784 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:31:53.956796 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:31:53.956809 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:31:53.956822 | orchestrator | 2026-02-02 03:31:53.956835 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-02-02 03:31:53.956849 | orchestrator | Monday 02 February 2026 03:31:03 +0000 (0:00:00.646) 0:05:17.706 ******* 2026-02-02 03:31:53.956861 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-02 03:31:53.956874 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-02 03:31:53.956886 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-02-02 03:31:53.956899 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-02-02 03:31:53.956912 | orchestrator | 2026-02-02 03:31:53.956939 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-02-02 03:31:53.956953 | orchestrator | Monday 02 February 2026 03:31:13 +0000 (0:00:09.903) 0:05:27.609 ******* 2026-02-02 03:31:53.956965 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:31:53.956978 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:31:53.956989 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:31:53.957001 | orchestrator | 2026-02-02 03:31:53.957013 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-02-02 03:31:53.957025 | orchestrator | Monday 02 February 2026 03:31:13 +0000 (0:00:00.401) 0:05:28.011 ******* 2026-02-02 03:31:53.957037 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-02 03:31:53.957047 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-02 03:31:53.957059 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-02 03:31:53.957069 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-02 03:31:53.957080 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 03:31:53.957091 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 03:31:53.957102 | orchestrator | 2026-02-02 03:31:53.957113 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-02-02 03:31:53.957124 | orchestrator | Monday 02 February 2026 03:31:16 +0000 (0:00:02.175) 0:05:30.186 ******* 2026-02-02 03:31:53.957135 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-02 03:31:53.957146 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-02 03:31:53.957157 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-02 
03:31:53.957167 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-02 03:31:53.957178 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-02-02 03:31:53.957189 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-02-02 03:31:53.957217 | orchestrator | 2026-02-02 03:31:53.957228 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-02-02 03:31:53.957240 | orchestrator | Monday 02 February 2026 03:31:17 +0000 (0:00:01.515) 0:05:31.701 ******* 2026-02-02 03:31:53.957251 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:31:53.957262 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:31:53.957273 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:31:53.957284 | orchestrator | 2026-02-02 03:31:53.957313 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-02-02 03:31:53.957324 | orchestrator | Monday 02 February 2026 03:31:18 +0000 (0:00:00.737) 0:05:32.439 ******* 2026-02-02 03:31:53.957336 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:31:53.957347 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:31:53.957358 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:31:53.957369 | orchestrator | 2026-02-02 03:31:53.957381 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-02 03:31:53.957393 | orchestrator | Monday 02 February 2026 03:31:18 +0000 (0:00:00.339) 0:05:32.778 ******* 2026-02-02 03:31:53.957414 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:31:53.957426 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:31:53.957437 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:31:53.957449 | orchestrator | 2026-02-02 03:31:53.957461 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-02 03:31:53.957472 | orchestrator | Monday 02 February 2026 03:31:18 +0000 (0:00:00.363) 
0:05:33.142 ******* 2026-02-02 03:31:53.957484 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:31:53.957496 | orchestrator | 2026-02-02 03:31:53.957508 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-02-02 03:31:53.957519 | orchestrator | Monday 02 February 2026 03:31:19 +0000 (0:00:00.895) 0:05:34.037 ******* 2026-02-02 03:31:53.957531 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:31:53.957543 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:31:53.957554 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:31:53.957566 | orchestrator | 2026-02-02 03:31:53.957578 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-02-02 03:31:53.957589 | orchestrator | Monday 02 February 2026 03:31:20 +0000 (0:00:00.379) 0:05:34.416 ******* 2026-02-02 03:31:53.957601 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:31:53.957612 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:31:53.957624 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:31:53.957635 | orchestrator | 2026-02-02 03:31:53.957647 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-02 03:31:53.957658 | orchestrator | Monday 02 February 2026 03:31:20 +0000 (0:00:00.371) 0:05:34.787 ******* 2026-02-02 03:31:53.957668 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:31:53.957680 | orchestrator | 2026-02-02 03:31:53.957692 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-02 03:31:53.957703 | orchestrator | Monday 02 February 2026 03:31:21 +0000 (0:00:00.925) 0:05:35.713 ******* 2026-02-02 03:31:53.957715 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:31:53.957727 | orchestrator | 
changed: [testbed-node-1] 2026-02-02 03:31:53.957738 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:31:53.957750 | orchestrator | 2026-02-02 03:31:53.957762 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-02 03:31:53.957773 | orchestrator | Monday 02 February 2026 03:31:22 +0000 (0:00:01.262) 0:05:36.975 ******* 2026-02-02 03:31:53.957785 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:31:53.957796 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:31:53.957806 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:31:53.957817 | orchestrator | 2026-02-02 03:31:53.957828 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-02 03:31:53.957839 | orchestrator | Monday 02 February 2026 03:31:24 +0000 (0:00:01.175) 0:05:38.150 ******* 2026-02-02 03:31:53.957850 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:31:53.957860 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:31:53.957871 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:31:53.957882 | orchestrator | 2026-02-02 03:31:53.957893 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-02-02 03:31:53.957908 | orchestrator | Monday 02 February 2026 03:31:26 +0000 (0:00:02.055) 0:05:40.206 ******* 2026-02-02 03:31:53.957919 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:31:53.957930 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:31:53.957941 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:31:53.957952 | orchestrator | 2026-02-02 03:31:53.957962 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-02 03:31:53.957973 | orchestrator | Monday 02 February 2026 03:31:28 +0000 (0:00:02.005) 0:05:42.211 ******* 2026-02-02 03:31:53.957984 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:31:53.957995 | orchestrator | skipping: 
[testbed-node-1] 2026-02-02 03:31:53.958006 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-02-02 03:31:53.958140 | orchestrator | 2026-02-02 03:31:53.958154 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-02-02 03:31:53.958166 | orchestrator | Monday 02 February 2026 03:31:28 +0000 (0:00:00.459) 0:05:42.670 ******* 2026-02-02 03:31:53.958176 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-02-02 03:31:53.958187 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-02-02 03:31:53.958214 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-02-02 03:31:53.958226 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-02-02 03:31:53.958236 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-02 03:31:53.958246 | orchestrator | 2026-02-02 03:31:53.958257 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-02-02 03:31:53.958268 | orchestrator | Monday 02 February 2026 03:31:52 +0000 (0:00:24.195) 0:06:06.866 ******* 2026-02-02 03:31:53.958279 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-02 03:31:53.958289 | orchestrator | 2026-02-02 03:31:53.958300 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-02-02 03:31:53.958320 | orchestrator | Monday 02 February 2026 03:31:53 +0000 (0:00:01.219) 0:06:08.085 ******* 2026-02-02 03:32:21.267856 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:32:21.267988 | orchestrator | 2026-02-02 03:32:21.268005 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] 
************************** 2026-02-02 03:32:21.268017 | orchestrator | Monday 02 February 2026 03:31:54 +0000 (0:00:00.324) 0:06:08.410 ******* 2026-02-02 03:32:21.268027 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:32:21.268037 | orchestrator | 2026-02-02 03:32:21.268047 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-02-02 03:32:21.268058 | orchestrator | Monday 02 February 2026 03:31:54 +0000 (0:00:00.439) 0:06:08.849 ******* 2026-02-02 03:32:21.268067 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-02-02 03:32:21.268078 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-02-02 03:32:21.268088 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-02-02 03:32:21.268098 | orchestrator | 2026-02-02 03:32:21.268108 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2026-02-02 03:32:21.268118 | orchestrator | Monday 02 February 2026 03:32:01 +0000 (0:00:06.371) 0:06:15.221 ******* 2026-02-02 03:32:21.268132 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-02-02 03:32:21.268148 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-02-02 03:32:21.268173 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-02-02 03:32:21.268191 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-02-02 03:32:21.268207 | orchestrator | 2026-02-02 03:32:21.268223 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-02 03:32:21.268239 | orchestrator | Monday 02 February 2026 03:32:05 +0000 (0:00:04.751) 0:06:19.972 ******* 2026-02-02 03:32:21.268255 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:32:21.268270 | orchestrator | changed: [testbed-node-1] 
2026-02-02 03:32:21.268285 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:32:21.268301 | orchestrator |
2026-02-02 03:32:21.268317 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-02 03:32:21.268330 | orchestrator | Monday 02 February 2026 03:32:06 +0000 (0:00:00.752) 0:06:20.725 *******
2026-02-02 03:32:21.268347 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 03:32:21.268398 | orchestrator |
2026-02-02 03:32:21.268485 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-02-02 03:32:21.268505 | orchestrator | Monday 02 February 2026 03:32:07 +0000 (0:00:00.896) 0:06:21.622 *******
2026-02-02 03:32:21.268522 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:32:21.268539 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:32:21.268555 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:32:21.268573 | orchestrator |
2026-02-02 03:32:21.268591 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-02-02 03:32:21.268609 | orchestrator | Monday 02 February 2026 03:32:07 +0000 (0:00:00.383) 0:06:22.006 *******
2026-02-02 03:32:21.268626 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:32:21.268642 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:32:21.268658 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:32:21.268675 | orchestrator |
2026-02-02 03:32:21.268686 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-02-02 03:32:21.268696 | orchestrator | Monday 02 February 2026 03:32:09 +0000 (0:00:01.144) 0:06:23.150 *******
2026-02-02 03:32:21.268706 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 03:32:21.268717 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-02 03:32:21.268743 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-02 03:32:21.268753 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:32:21.268763 | orchestrator |
2026-02-02 03:32:21.268772 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-02-02 03:32:21.268782 | orchestrator | Monday 02 February 2026 03:32:10 +0000 (0:00:01.300) 0:06:24.451 *******
2026-02-02 03:32:21.268792 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:32:21.268802 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:32:21.268811 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:32:21.268821 | orchestrator |
2026-02-02 03:32:21.268830 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-02-02 03:32:21.268840 | orchestrator |
2026-02-02 03:32:21.268850 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-02 03:32:21.268860 | orchestrator | Monday 02 February 2026 03:32:10 +0000 (0:00:00.605) 0:06:25.056 *******
2026-02-02 03:32:21.268871 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 03:32:21.268882 | orchestrator |
2026-02-02 03:32:21.268892 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-02 03:32:21.268901 | orchestrator | Monday 02 February 2026 03:32:11 +0000 (0:00:00.800) 0:06:25.857 *******
2026-02-02 03:32:21.268911 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 03:32:21.268921 | orchestrator |
2026-02-02 03:32:21.268931 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-02 03:32:21.268940 | orchestrator | Monday 02 February 2026 03:32:12 +0000 (0:00:00.595) 0:06:26.452 *******
2026-02-02 03:32:21.268950 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:32:21.268960 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:32:21.268970 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:32:21.268979 | orchestrator |
2026-02-02 03:32:21.268989 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-02 03:32:21.268999 | orchestrator | Monday 02 February 2026 03:32:12 +0000 (0:00:00.315) 0:06:26.768 *******
2026-02-02 03:32:21.269008 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:32:21.269019 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:32:21.269035 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:32:21.269056 | orchestrator |
2026-02-02 03:32:21.269102 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-02 03:32:21.269119 | orchestrator | Monday 02 February 2026 03:32:13 +0000 (0:00:01.001) 0:06:27.770 *******
2026-02-02 03:32:21.269134 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:32:21.269149 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:32:21.269177 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:32:21.269190 | orchestrator |
2026-02-02 03:32:21.269205 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-02 03:32:21.269221 | orchestrator | Monday 02 February 2026 03:32:14 +0000 (0:00:00.706) 0:06:28.477 *******
2026-02-02 03:32:21.269234 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:32:21.269250 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:32:21.269264 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:32:21.269279 | orchestrator |
2026-02-02 03:32:21.269294 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-02 03:32:21.269308 | orchestrator | Monday 02 February 2026 03:32:15 +0000 (0:00:00.725) 0:06:29.203 *******
2026-02-02 03:32:21.269324 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:32:21.269340 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:32:21.269384 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:32:21.269399 | orchestrator |
2026-02-02 03:32:21.269416 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-02 03:32:21.269434 | orchestrator | Monday 02 February 2026 03:32:15 +0000 (0:00:00.325) 0:06:29.528 *******
2026-02-02 03:32:21.269451 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:32:21.269468 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:32:21.269481 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:32:21.269491 | orchestrator |
2026-02-02 03:32:21.269501 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-02 03:32:21.269511 | orchestrator | Monday 02 February 2026 03:32:15 +0000 (0:00:00.611) 0:06:30.139 *******
2026-02-02 03:32:21.269520 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:32:21.269530 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:32:21.269540 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:32:21.269550 | orchestrator |
2026-02-02 03:32:21.269560 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-02 03:32:21.269569 | orchestrator | Monday 02 February 2026 03:32:16 +0000 (0:00:00.337) 0:06:30.476 *******
2026-02-02 03:32:21.269582 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:32:21.269598 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:32:21.269615 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:32:21.269630 | orchestrator |
2026-02-02 03:32:21.269646 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-02 03:32:21.269661 | orchestrator | Monday 02 February 2026 03:32:17 +0000 (0:00:00.741) 0:06:31.218 *******
2026-02-02 03:32:21.269676 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:32:21.269692 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:32:21.269709 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:32:21.269726 | orchestrator |
2026-02-02 03:32:21.269742 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-02 03:32:21.269759 | orchestrator | Monday 02 February 2026 03:32:17 +0000 (0:00:00.717) 0:06:31.935 *******
2026-02-02 03:32:21.269776 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:32:21.269788 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:32:21.269798 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:32:21.269808 | orchestrator |
2026-02-02 03:32:21.269818 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-02 03:32:21.269827 | orchestrator | Monday 02 February 2026 03:32:18 +0000 (0:00:00.614) 0:06:32.549 *******
2026-02-02 03:32:21.269837 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:32:21.269847 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:32:21.269857 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:32:21.269866 | orchestrator |
2026-02-02 03:32:21.269876 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-02 03:32:21.269903 | orchestrator | Monday 02 February 2026 03:32:18 +0000 (0:00:00.364) 0:06:32.914 *******
2026-02-02 03:32:21.269914 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:32:21.269924 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:32:21.269933 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:32:21.269943 | orchestrator |
2026-02-02 03:32:21.269969 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-02 03:32:21.269983 | orchestrator | Monday 02 February 2026 03:32:19 +0000 (0:00:00.363) 0:06:33.278 *******
2026-02-02 03:32:21.270002 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:32:21.270110 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:32:21.270130 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:32:21.270145 | orchestrator |
2026-02-02 03:32:21.270160 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-02 03:32:21.270174 | orchestrator | Monday 02 February 2026 03:32:19 +0000 (0:00:00.346) 0:06:33.625 *******
2026-02-02 03:32:21.270190 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:32:21.270205 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:32:21.270221 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:32:21.270237 | orchestrator |
2026-02-02 03:32:21.270253 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-02 03:32:21.270269 | orchestrator | Monday 02 February 2026 03:32:20 +0000 (0:00:00.689) 0:06:34.314 *******
2026-02-02 03:32:21.270286 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:32:21.270303 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:32:21.270319 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:32:21.270336 | orchestrator |
2026-02-02 03:32:21.270408 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-02 03:32:21.270430 | orchestrator | Monday 02 February 2026 03:32:20 +0000 (0:00:00.384) 0:06:34.699 *******
2026-02-02 03:32:21.270446 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:32:21.270463 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:32:21.270480 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:32:21.270496 | orchestrator |
2026-02-02 03:32:21.270511 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-02 03:32:21.270529 | orchestrator | Monday 02 February 2026 03:32:20 +0000 (0:00:00.360) 0:06:35.059 *******
2026-02-02 03:32:21.270545 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:32:21.270563 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:32:21.270580 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:32:21.270596 | orchestrator |
2026-02-02 03:32:21.270613 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-02 03:32:21.270650 | orchestrator | Monday 02 February 2026 03:32:21 +0000 (0:00:00.337) 0:06:35.397 *******
2026-02-02 03:33:17.654604 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:33:17.654789 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:33:17.654804 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:33:17.654812 | orchestrator |
2026-02-02 03:33:17.654820 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-02 03:33:17.654829 | orchestrator | Monday 02 February 2026 03:32:21 +0000 (0:00:00.740) 0:06:36.138 *******
2026-02-02 03:33:17.654835 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:33:17.654841 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:33:17.654847 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:33:17.654852 | orchestrator |
2026-02-02 03:33:17.654859 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-02-02 03:33:17.654865 | orchestrator | Monday 02 February 2026 03:32:22 +0000 (0:00:00.583) 0:06:36.721 *******
2026-02-02 03:33:17.654872 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:33:17.654878 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:33:17.654884 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:33:17.654889 | orchestrator |
2026-02-02 03:33:17.654896 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-02-02 03:33:17.654902 | orchestrator | Monday 02 February 2026 03:32:22 +0000 (0:00:00.358) 0:06:37.080 *******
2026-02-02 03:33:17.654909 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-02 03:33:17.654916 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 03:33:17.654922 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 03:33:17.654955 | orchestrator |
2026-02-02 03:33:17.654961 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-02-02 03:33:17.654967 | orchestrator | Monday 02 February 2026 03:32:24 +0000 (0:00:01.291) 0:06:38.372 *******
2026-02-02 03:33:17.654974 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 03:33:17.654980 | orchestrator |
2026-02-02 03:33:17.654986 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-02-02 03:33:17.654992 | orchestrator | Monday 02 February 2026 03:32:24 +0000 (0:00:00.740) 0:06:39.112 *******
2026-02-02 03:33:17.654998 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:33:17.655006 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:33:17.655011 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:33:17.655017 | orchestrator |
2026-02-02 03:33:17.655023 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-02-02 03:33:17.655029 | orchestrator | Monday 02 February 2026 03:32:25 +0000 (0:00:00.352) 0:06:39.465 *******
2026-02-02 03:33:17.655034 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:33:17.655040 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:33:17.655046 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:33:17.655052 | orchestrator |
2026-02-02 03:33:17.655058 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-02-02 03:33:17.655064 | orchestrator | Monday 02 February 2026 03:32:25 +0000 (0:00:00.608) 0:06:40.074 *******
2026-02-02 03:33:17.655070 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:33:17.655076 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:33:17.655082 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:33:17.655088 | orchestrator |
2026-02-02 03:33:17.655094 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-02-02 03:33:17.655100 | orchestrator | Monday 02 February 2026 03:32:26 +0000 (0:00:00.679) 0:06:40.753 *******
2026-02-02 03:33:17.655105 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:33:17.655112 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:33:17.655118 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:33:17.655123 | orchestrator |
2026-02-02 03:33:17.655304 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-02-02 03:33:17.655415 | orchestrator | Monday 02 February 2026 03:32:26 +0000 (0:00:00.368) 0:06:41.122 *******
2026-02-02 03:33:17.655421 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-02 03:33:17.655427 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-02 03:33:17.655431 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-02 03:33:17.655436 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-02 03:33:17.655441 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-02 03:33:17.655445 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-02 03:33:17.655449 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-02 03:33:17.655453 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-02 03:33:17.655457 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-02 03:33:17.655461 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-02 03:33:17.655464 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-02 03:33:17.655468 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-02 03:33:17.655472 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-02 03:33:17.655476 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-02 03:33:17.655497 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-02 03:33:17.655501 | orchestrator |
2026-02-02 03:33:17.655506 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-02-02 03:33:17.655544 | orchestrator | Monday 02 February 2026 03:32:28 +0000 (0:00:01.868) 0:06:42.991 *******
2026-02-02 03:33:17.655549 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:33:17.655555 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:33:17.655559 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:33:17.655562 | orchestrator |
2026-02-02 03:33:17.655566 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-02-02 03:33:17.655570 | orchestrator | Monday 02 February 2026 03:32:29 +0000 (0:00:00.612) 0:06:43.604 *******
2026-02-02 03:33:17.655574 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 03:33:17.655578 | orchestrator |
2026-02-02 03:33:17.655582 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-02-02 03:33:17.655586 | orchestrator | Monday 02 February 2026 03:32:30 +0000 (0:00:00.564) 0:06:44.169 *******
2026-02-02 03:33:17.655589 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-02 03:33:17.655594 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-02 03:33:17.655597 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-02 03:33:17.655602 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-02-02 03:33:17.655606 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-02-02 03:33:17.655610 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-02-02 03:33:17.655614 | orchestrator |
2026-02-02 03:33:17.655617 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-02-02 03:33:17.655621 | orchestrator | Monday 02 February 2026 03:32:30 +0000 (0:00:00.926) 0:06:45.095 *******
2026-02-02 03:33:17.655625 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 03:33:17.655629 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-02 03:33:17.655633 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-02 03:33:17.655675 | orchestrator |
2026-02-02 03:33:17.655682 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-02-02 03:33:17.655689 | orchestrator | Monday 02 February 2026 03:32:33 +0000 (0:00:02.258) 0:06:47.353 *******
2026-02-02 03:33:17.655695 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-02 03:33:17.655703 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-02 03:33:17.655711 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:33:17.655719 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-02 03:33:17.655726 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-02 03:33:17.655732 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:33:17.655738 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-02 03:33:17.655744 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-02 03:33:17.655751 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:33:17.655757 | orchestrator |
2026-02-02 03:33:17.655763 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-02-02 03:33:17.655769 | orchestrator | Monday 02 February 2026 03:32:34 +0000 (0:00:01.465) 0:06:48.819 *******
2026-02-02 03:33:17.655775 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-02 03:33:17.655782 | orchestrator |
2026-02-02 03:33:17.655788 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-02-02 03:33:17.655794 | orchestrator | Monday 02 February 2026 03:32:36 +0000 (0:00:02.022) 0:06:50.842 *******
2026-02-02 03:33:17.655812 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 03:33:17.655819 | orchestrator |
2026-02-02 03:33:17.655840 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-02-02 03:33:17.655847 | orchestrator | Monday 02 February 2026 03:32:37 +0000 (0:00:00.577) 0:06:51.420 *******
2026-02-02 03:33:17.655854 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'})
2026-02-02 03:33:17.655862 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'})
2026-02-02 03:33:17.655869 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'})
2026-02-02 03:33:17.655874 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'})
2026-02-02 03:33:17.655878 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'})
2026-02-02 03:33:17.655882 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'})
2026-02-02 03:33:17.655886 | orchestrator |
2026-02-02 03:33:17.655890 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-02-02 03:33:17.655894 | orchestrator | Monday 02 February 2026 03:33:16 +0000 (0:00:39.383) 0:07:30.803 *******
2026-02-02 03:33:17.655897 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:33:17.655901 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:33:17.655905 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:33:17.655909 | orchestrator |
2026-02-02 03:33:17.655912 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-02-02 03:33:17.655916 | orchestrator | Monday 02 February 2026 03:33:17 +0000 (0:00:00.357) 0:07:31.160 *******
2026-02-02 03:33:17.655926 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 03:33:56.641401 | orchestrator |
2026-02-02 03:33:56.641511 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-02-02 03:33:56.641525 | orchestrator | Monday 02 February 2026 03:33:17 +0000 (0:00:00.629) 0:07:31.790 *******
2026-02-02 03:33:56.641535 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:33:56.641545 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:33:56.641553 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:33:56.641561 | orchestrator |
2026-02-02 03:33:56.641570 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-02-02 03:33:56.641579 | orchestrator | Monday 02 February 2026 03:33:18 +0000 (0:00:01.000) 0:07:32.790 *******
2026-02-02 03:33:56.641589 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:33:56.641598 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:33:56.641607 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:33:56.641616 | orchestrator |
2026-02-02 03:33:56.641626 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-02-02 03:33:56.641635 | orchestrator | Monday 02 February 2026 03:33:21 +0000 (0:00:02.437) 0:07:35.228 *******
2026-02-02 03:33:56.641645 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 03:33:56.641656 | orchestrator |
2026-02-02 03:33:56.641665 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-02-02 03:33:56.641675 | orchestrator | Monday 02 February 2026 03:33:21 +0000 (0:00:00.563) 0:07:35.791 *******
2026-02-02 03:33:56.641685 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:33:56.641695 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:33:56.641704 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:33:56.641713 | orchestrator |
2026-02-02 03:33:56.641723 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-02-02 03:33:56.641733 | orchestrator | Monday 02 February 2026 03:33:23 +0000 (0:00:01.568) 0:07:37.359 *******
2026-02-02 03:33:56.641767 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:33:56.641777 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:33:56.641786 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:33:56.641796 | orchestrator |
2026-02-02 03:33:56.641805 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-02-02 03:33:56.641813 | orchestrator | Monday 02 February 2026 03:33:24 +0000 (0:00:01.176) 0:07:38.536 *******
2026-02-02 03:33:56.641874 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:33:56.641885 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:33:56.641895 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:33:56.641904 | orchestrator |
2026-02-02 03:33:56.641912 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-02-02 03:33:56.641921 | orchestrator | Monday 02 February 2026 03:33:26 +0000 (0:00:01.686) 0:07:40.223 *******
2026-02-02 03:33:56.641931 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:33:56.641941 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:33:56.641951 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:33:56.641962 | orchestrator |
2026-02-02 03:33:56.641972 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-02-02 03:33:56.641983 | orchestrator | Monday 02 February 2026 03:33:26 +0000 (0:00:00.406) 0:07:40.630 *******
2026-02-02 03:33:56.641994 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:33:56.642004 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:33:56.642067 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:33:56.642080 | orchestrator |
2026-02-02 03:33:56.642089 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-02-02 03:33:56.642098 | orchestrator | Monday 02 February 2026 03:33:27 +0000 (0:00:00.887) 0:07:41.517 *******
2026-02-02 03:33:56.642105 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-02 03:33:56.642128 | orchestrator | ok: [testbed-node-4] => (item=5)
2026-02-02 03:33:56.642137 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-02-02 03:33:56.642145 | orchestrator | ok: [testbed-node-3] => (item=4)
2026-02-02 03:33:56.642153 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-02-02 03:33:56.642161 | orchestrator | ok: [testbed-node-5] => (item=3)
2026-02-02 03:33:56.642169 | orchestrator |
2026-02-02 03:33:56.642177 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-02-02 03:33:56.642185 | orchestrator | Monday 02 February 2026 03:33:28 +0000 (0:00:01.105) 0:07:42.622 *******
2026-02-02 03:33:56.642194 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-02-02 03:33:56.642233 | orchestrator | changed: [testbed-node-4] => (item=5)
2026-02-02 03:33:56.642242 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-02-02 03:33:56.642252 | orchestrator | changed: [testbed-node-3] => (item=4)
2026-02-02 03:33:56.642260 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-02-02 03:33:56.642269 | orchestrator | changed: [testbed-node-5] => (item=3)
2026-02-02 03:33:56.642278 | orchestrator |
2026-02-02 03:33:56.642287 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-02-02 03:33:56.642295 | orchestrator | Monday 02 February 2026 03:33:30 +0000 (0:00:02.257) 0:07:44.880 *******
2026-02-02 03:33:56.642303 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-02-02 03:33:56.642311 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-02-02 03:33:56.642318 | orchestrator | changed: [testbed-node-4] => (item=5)
2026-02-02 03:33:56.642326 | orchestrator | changed: [testbed-node-3] => (item=4)
2026-02-02 03:33:56.642334 | orchestrator | changed: [testbed-node-5] => (item=3)
2026-02-02 03:33:56.642344 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-02-02 03:33:56.642352 | orchestrator |
2026-02-02 03:33:56.642360 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-02-02 03:33:56.642368 | orchestrator | Monday 02 February 2026 03:33:34 +0000 (0:00:03.510) 0:07:48.390 *******
2026-02-02 03:33:56.642376 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:33:56.642385 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:33:56.642406 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-02 03:33:56.642412 | orchestrator |
2026-02-02 03:33:56.642417 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-02-02 03:33:56.642423 | orchestrator | Monday 02 February 2026 03:33:37 +0000 (0:00:02.929) 0:07:51.320 *******
2026-02-02 03:33:56.642428 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:33:56.642433 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:33:56.642455 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-02-02 03:33:56.642461 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-02 03:33:56.642467 | orchestrator |
2026-02-02 03:33:56.642472 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-02-02 03:33:56.642477 | orchestrator | Monday 02 February 2026 03:33:49 +0000 (0:00:12.646) 0:08:03.966 *******
2026-02-02 03:33:56.642482 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:33:56.642487 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:33:56.642492 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:33:56.642497 | orchestrator |
2026-02-02 03:33:56.642503 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-02 03:33:56.642508 | orchestrator | Monday 02 February 2026 03:33:51 +0000 (0:00:01.211) 0:08:05.178 *******
2026-02-02 03:33:56.642513 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:33:56.642518 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:33:56.642523 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:33:56.642528 | orchestrator |
2026-02-02 03:33:56.642533 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-02 03:33:56.642539 | orchestrator | Monday 02 February 2026 03:33:51 +0000 (0:00:00.402) 0:08:05.580 *******
2026-02-02 03:33:56.642544 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 03:33:56.642549 | orchestrator |
2026-02-02 03:33:56.642555 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-02-02 03:33:56.642562 | orchestrator | Monday 02 February 2026 03:33:52 +0000 (0:00:00.879) 0:08:06.460 *******
2026-02-02 03:33:56.642570 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 03:33:56.642578 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 03:33:56.642585 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 03:33:56.642591 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:33:56.642603 | orchestrator |
2026-02-02 03:33:56.642614 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-02 03:33:56.642621 | orchestrator | Monday 02 February 2026 03:33:52 +0000 (0:00:00.440) 0:08:06.901 *******
2026-02-02 03:33:56.642629 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:33:56.642637 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:33:56.642645 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:33:56.642652 | orchestrator |
2026-02-02 03:33:56.642661 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-02 03:33:56.642669 | orchestrator | Monday 02 February 2026 03:33:53 +0000 (0:00:00.364) 0:08:07.265 *******
2026-02-02 03:33:56.642677 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:33:56.642684 | orchestrator |
2026-02-02 03:33:56.642692 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-02 03:33:56.642700 | orchestrator | Monday 02 February 2026 03:33:53 +0000 (0:00:00.261) 0:08:07.526 *******
2026-02-02 03:33:56.642707 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:33:56.642715 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:33:56.642723 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:33:56.642732 | orchestrator |
2026-02-02 03:33:56.642740 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-02 03:33:56.642749 | orchestrator | Monday 02 February 2026 03:33:53 +0000 (0:00:00.368) 0:08:07.895 *******
2026-02-02 03:33:56.642765 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:33:56.642773 | orchestrator |
2026-02-02 03:33:56.642788 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-02 03:33:56.642798 | orchestrator | Monday 02 February 2026 03:33:53 +0000 (0:00:00.243) 0:08:08.139 *******
2026-02-02 03:33:56.642804 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:33:56.642809 | orchestrator |
2026-02-02 03:33:56.642814 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-02 03:33:56.642855 | orchestrator | Monday 02 February 2026 03:33:54 +0000 (0:00:00.231) 0:08:08.370 *******
2026-02-02 03:33:56.642862 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:33:56.642867 | orchestrator |
2026-02-02 03:33:56.642872 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-02 03:33:56.642877 | orchestrator | Monday 02 February 2026 03:33:54 +0000 (0:00:00.138) 0:08:08.508 *******
2026-02-02 03:33:56.642882 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:33:56.642887 | orchestrator |
2026-02-02 03:33:56.642892 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-02 03:33:56.642897 | orchestrator | Monday 02 February 2026 03:33:55 +0000 (0:00:00.926) 0:08:09.435 *******
2026-02-02 03:33:56.642902 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:33:56.642908 | orchestrator |
2026-02-02 03:33:56.642913 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-02 03:33:56.642918 | orchestrator | Monday 02 February 2026 03:33:55 +0000 (0:00:00.277) 0:08:09.713 *******
2026-02-02 03:33:56.642923 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 03:33:56.642928 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 03:33:56.642933 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 03:33:56.642938 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:33:56.642943 | orchestrator |
2026-02-02 03:33:56.642948 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-02 03:33:56.642953 | orchestrator | Monday 02 February 2026 03:33:56 +0000 (0:00:00.376) 0:08:10.145 *******
2026-02-02 03:33:56.642958 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:33:56.642963 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:33:56.642968 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:33:56.642973 | orchestrator |
2026-02-02 03:33:56.642979 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-02 03:33:56.642984 | orchestrator | Monday 02 February 2026 03:33:56 +0000 (0:00:00.251) 0:08:10.522 *******
2026-02-02 03:33:56.642989 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:33:56.642994 | orchestrator |
2026-02-02 03:33:56.642999 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-02 03:33:56.643011 | orchestrator | Monday 02 February 2026 03:33:56 +0000 (0:00:00.251) 0:08:10.773 *******
2026-02-02 03:34:22.448108 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:34:22.448263 | orchestrator |
2026-02-02 03:34:22.448294 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-02-02 03:34:22.448317 | orchestrator |
2026-02-02 03:34:22.448338 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-02 03:34:22.448360 | orchestrator | Monday 02 February 2026 03:33:57 +0000 (0:00:01.109) 0:08:11.883 *******
2026-02-02 03:34:22.448381 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 03:34:22.448404 | orchestrator |
2026-02-02 03:34:22.448423 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-02 03:34:22.448443 | orchestrator | Monday 02 February 2026 03:33:59 +0000 (0:00:01.332) 0:08:13.215 *******
2026-02-02 03:34:22.448463 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 03:34:22.448520 | orchestrator |
2026-02-02 03:34:22.448545 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-02 03:34:22.448568 | orchestrator | Monday 02 February 2026 03:34:00 +0000 (0:00:01.386) 0:08:14.602 *******
2026-02-02 03:34:22.448589 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:34:22.448609 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:34:22.448627 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:34:22.448647 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:34:22.448669 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:34:22.448688 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:34:22.448707 | orchestrator |
2026-02-02 03:34:22.448727 | orchestrator | TASK [ceph-handler : Check for an osd container]
******************************* 2026-02-02 03:34:22.448748 | orchestrator | Monday 02 February 2026 03:34:01 +0000 (0:00:01.054) 0:08:15.656 ******* 2026-02-02 03:34:22.448767 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:34:22.448785 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:34:22.448803 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:34:22.448821 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:34:22.448839 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:34:22.448857 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:34:22.448876 | orchestrator | 2026-02-02 03:34:22.448894 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-02 03:34:22.448913 | orchestrator | Monday 02 February 2026 03:34:02 +0000 (0:00:00.997) 0:08:16.654 ******* 2026-02-02 03:34:22.448931 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:34:22.448981 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:34:22.448999 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:34:22.449017 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:34:22.449052 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:34:22.449070 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:34:22.449087 | orchestrator | 2026-02-02 03:34:22.449120 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-02 03:34:22.449138 | orchestrator | Monday 02 February 2026 03:34:03 +0000 (0:00:00.745) 0:08:17.399 ******* 2026-02-02 03:34:22.449156 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:34:22.449175 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:34:22.449192 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:34:22.449211 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:34:22.449230 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:34:22.449249 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:34:22.449268 | orchestrator | 2026-02-02 
03:34:22.449310 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-02 03:34:22.449329 | orchestrator | Monday 02 February 2026 03:34:04 +0000 (0:00:00.945) 0:08:18.345 ******* 2026-02-02 03:34:22.449348 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:34:22.449366 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:34:22.449384 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:34:22.449403 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:34:22.449421 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:34:22.449439 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:34:22.449457 | orchestrator | 2026-02-02 03:34:22.449476 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-02 03:34:22.449495 | orchestrator | Monday 02 February 2026 03:34:05 +0000 (0:00:01.043) 0:08:19.389 ******* 2026-02-02 03:34:22.449513 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:34:22.449532 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:34:22.449551 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:34:22.449569 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:34:22.449588 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:34:22.449606 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:34:22.449625 | orchestrator | 2026-02-02 03:34:22.449643 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-02 03:34:22.449661 | orchestrator | Monday 02 February 2026 03:34:06 +0000 (0:00:00.899) 0:08:20.288 ******* 2026-02-02 03:34:22.449680 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:34:22.449717 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:34:22.449735 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:34:22.449753 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:34:22.449771 | orchestrator | skipping: [testbed-node-1] 
2026-02-02 03:34:22.449790 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:34:22.449809 | orchestrator | 2026-02-02 03:34:22.449827 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-02 03:34:22.449845 | orchestrator | Monday 02 February 2026 03:34:06 +0000 (0:00:00.680) 0:08:20.969 ******* 2026-02-02 03:34:22.449864 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:34:22.449882 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:34:22.449901 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:34:22.449919 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:34:22.449975 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:34:22.449994 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:34:22.450201 | orchestrator | 2026-02-02 03:34:22.450229 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-02 03:34:22.450249 | orchestrator | Monday 02 February 2026 03:34:08 +0000 (0:00:01.386) 0:08:22.355 ******* 2026-02-02 03:34:22.450267 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:34:22.450285 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:34:22.450302 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:34:22.450353 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:34:22.450372 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:34:22.450391 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:34:22.450410 | orchestrator | 2026-02-02 03:34:22.450428 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-02 03:34:22.450447 | orchestrator | Monday 02 February 2026 03:34:09 +0000 (0:00:01.073) 0:08:23.428 ******* 2026-02-02 03:34:22.450466 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:34:22.450486 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:34:22.450505 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:34:22.450522 | orchestrator | skipping: [testbed-node-0] 
2026-02-02 03:34:22.450539 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:34:22.450555 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:34:22.450572 | orchestrator | 2026-02-02 03:34:22.450589 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-02 03:34:22.450605 | orchestrator | Monday 02 February 2026 03:34:10 +0000 (0:00:00.939) 0:08:24.367 ******* 2026-02-02 03:34:22.450622 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:34:22.450639 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:34:22.450655 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:34:22.450671 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:34:22.450688 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:34:22.450704 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:34:22.450720 | orchestrator | 2026-02-02 03:34:22.450737 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-02 03:34:22.450753 | orchestrator | Monday 02 February 2026 03:34:10 +0000 (0:00:00.676) 0:08:25.044 ******* 2026-02-02 03:34:22.450769 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:34:22.450785 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:34:22.450801 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:34:22.450818 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:34:22.450834 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:34:22.450851 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:34:22.450867 | orchestrator | 2026-02-02 03:34:22.450884 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-02 03:34:22.450901 | orchestrator | Monday 02 February 2026 03:34:11 +0000 (0:00:00.979) 0:08:26.024 ******* 2026-02-02 03:34:22.450917 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:34:22.451039 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:34:22.451064 | orchestrator | ok: 
[testbed-node-5] 2026-02-02 03:34:22.451081 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:34:22.451097 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:34:22.451132 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:34:22.451149 | orchestrator | 2026-02-02 03:34:22.451166 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-02 03:34:22.451183 | orchestrator | Monday 02 February 2026 03:34:12 +0000 (0:00:00.744) 0:08:26.768 ******* 2026-02-02 03:34:22.451199 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:34:22.451215 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:34:22.451230 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:34:22.451246 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:34:22.451262 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:34:22.451277 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:34:22.451293 | orchestrator | 2026-02-02 03:34:22.451309 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-02 03:34:22.451326 | orchestrator | Monday 02 February 2026 03:34:13 +0000 (0:00:01.014) 0:08:27.783 ******* 2026-02-02 03:34:22.451342 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:34:22.451358 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:34:22.451373 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:34:22.451388 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:34:22.451403 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:34:22.451419 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:34:22.451434 | orchestrator | 2026-02-02 03:34:22.451450 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-02 03:34:22.451465 | orchestrator | Monday 02 February 2026 03:34:14 +0000 (0:00:00.720) 0:08:28.504 ******* 2026-02-02 03:34:22.451482 | orchestrator | skipping: [testbed-node-3] 
2026-02-02 03:34:22.451498 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:34:22.451513 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:34:22.451530 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:34:22.451545 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:34:22.451561 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:34:22.451577 | orchestrator | 2026-02-02 03:34:22.451593 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-02 03:34:22.451609 | orchestrator | Monday 02 February 2026 03:34:15 +0000 (0:00:00.987) 0:08:29.491 ******* 2026-02-02 03:34:22.451625 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:34:22.451640 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:34:22.451656 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:34:22.451672 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:34:22.451688 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:34:22.451703 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:34:22.451718 | orchestrator | 2026-02-02 03:34:22.451733 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-02 03:34:22.451750 | orchestrator | Monday 02 February 2026 03:34:16 +0000 (0:00:00.737) 0:08:30.229 ******* 2026-02-02 03:34:22.451767 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:34:22.451783 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:34:22.451799 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:34:22.451815 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:34:22.451832 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:34:22.451848 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:34:22.451864 | orchestrator | 2026-02-02 03:34:22.451879 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-02 03:34:22.451895 | orchestrator | Monday 02 February 2026 03:34:17 +0000 (0:00:01.021) 
0:08:31.251 ******* 2026-02-02 03:34:22.451910 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:34:22.452023 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:34:22.452047 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:34:22.452069 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:34:22.452088 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:34:22.452110 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:34:22.452129 | orchestrator | 2026-02-02 03:34:22.452150 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-02-02 03:34:22.452171 | orchestrator | Monday 02 February 2026 03:34:18 +0000 (0:00:01.427) 0:08:32.679 ******* 2026-02-02 03:34:22.452231 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-02 03:34:22.452252 | orchestrator | 2026-02-02 03:34:22.452295 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-02-02 03:34:50.130327 | orchestrator | Monday 02 February 2026 03:34:22 +0000 (0:00:03.894) 0:08:36.574 ******* 2026-02-02 03:34:50.130407 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-02 03:34:50.130414 | orchestrator | 2026-02-02 03:34:50.130419 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-02-02 03:34:50.130424 | orchestrator | Monday 02 February 2026 03:34:24 +0000 (0:00:01.957) 0:08:38.531 ******* 2026-02-02 03:34:50.130431 | orchestrator | changed: [testbed-node-3] 2026-02-02 03:34:50.130437 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:34:50.130444 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:34:50.130450 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:34:50.130457 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:34:50.130463 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:34:50.130470 | orchestrator | 2026-02-02 03:34:50.130476 | orchestrator | TASK [ceph-crash : Create 
/var/lib/ceph/crash/posted] ************************** 2026-02-02 03:34:50.130481 | orchestrator | Monday 02 February 2026 03:34:26 +0000 (0:00:01.772) 0:08:40.304 ******* 2026-02-02 03:34:50.130485 | orchestrator | changed: [testbed-node-3] 2026-02-02 03:34:50.130489 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:34:50.130493 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:34:50.130497 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:34:50.130501 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:34:50.130505 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:34:50.130509 | orchestrator | 2026-02-02 03:34:50.130513 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-02-02 03:34:50.130517 | orchestrator | Monday 02 February 2026 03:34:27 +0000 (0:00:01.061) 0:08:41.365 ******* 2026-02-02 03:34:50.130522 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:34:50.130527 | orchestrator | 2026-02-02 03:34:50.130531 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-02-02 03:34:50.130534 | orchestrator | Monday 02 February 2026 03:34:28 +0000 (0:00:01.472) 0:08:42.837 ******* 2026-02-02 03:34:50.130538 | orchestrator | changed: [testbed-node-3] 2026-02-02 03:34:50.130542 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:34:50.130546 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:34:50.130550 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:34:50.130554 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:34:50.130557 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:34:50.130561 | orchestrator | 2026-02-02 03:34:50.130565 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-02-02 03:34:50.130569 | orchestrator | 
Monday 02 February 2026 03:34:30 +0000 (0:00:01.806) 0:08:44.644 ******* 2026-02-02 03:34:50.130573 | orchestrator | changed: [testbed-node-3] 2026-02-02 03:34:50.130577 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:34:50.130580 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:34:50.130584 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:34:50.130588 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:34:50.130592 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:34:50.130596 | orchestrator | 2026-02-02 03:34:50.130599 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-02-02 03:34:50.130604 | orchestrator | Monday 02 February 2026 03:34:33 +0000 (0:00:03.471) 0:08:48.116 ******* 2026-02-02 03:34:50.130619 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:34:50.130623 | orchestrator | 2026-02-02 03:34:50.130627 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-02-02 03:34:50.130647 | orchestrator | Monday 02 February 2026 03:34:35 +0000 (0:00:01.565) 0:08:49.681 ******* 2026-02-02 03:34:50.130651 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:34:50.130655 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:34:50.130659 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:34:50.130663 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:34:50.130667 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:34:50.130670 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:34:50.130674 | orchestrator | 2026-02-02 03:34:50.130678 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-02-02 03:34:50.130682 | orchestrator | Monday 02 February 2026 03:34:36 +0000 (0:00:01.071) 0:08:50.753 ******* 2026-02-02 03:34:50.130685 | orchestrator | changed: 
[testbed-node-3] 2026-02-02 03:34:50.130689 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:34:50.130693 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:34:50.130697 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:34:50.130701 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:34:50.130704 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:34:50.130708 | orchestrator | 2026-02-02 03:34:50.130712 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-02-02 03:34:50.130716 | orchestrator | Monday 02 February 2026 03:34:39 +0000 (0:00:02.456) 0:08:53.210 ******* 2026-02-02 03:34:50.130720 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:34:50.130723 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:34:50.130727 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:34:50.130731 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:34:50.130735 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:34:50.130738 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:34:50.130742 | orchestrator | 2026-02-02 03:34:50.130746 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-02-02 03:34:50.130750 | orchestrator | 2026-02-02 03:34:50.130754 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-02 03:34:50.130758 | orchestrator | Monday 02 February 2026 03:34:40 +0000 (0:00:01.256) 0:08:54.466 ******* 2026-02-02 03:34:50.130763 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 03:34:50.130767 | orchestrator | 2026-02-02 03:34:50.130771 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-02 03:34:50.130774 | orchestrator | Monday 02 February 2026 03:34:41 +0000 (0:00:00.849) 0:08:55.316 ******* 2026-02-02 03:34:50.130790 | orchestrator | 
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 03:34:50.130794 | orchestrator | 2026-02-02 03:34:50.130798 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-02 03:34:50.130802 | orchestrator | Monday 02 February 2026 03:34:41 +0000 (0:00:00.580) 0:08:55.896 ******* 2026-02-02 03:34:50.130805 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:34:50.130809 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:34:50.130813 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:34:50.130817 | orchestrator | 2026-02-02 03:34:50.130820 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-02 03:34:50.130824 | orchestrator | Monday 02 February 2026 03:34:42 +0000 (0:00:00.358) 0:08:56.255 ******* 2026-02-02 03:34:50.130828 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:34:50.130831 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:34:50.130835 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:34:50.130839 | orchestrator | 2026-02-02 03:34:50.130843 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-02 03:34:50.130846 | orchestrator | Monday 02 February 2026 03:34:43 +0000 (0:00:01.012) 0:08:57.267 ******* 2026-02-02 03:34:50.130850 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:34:50.130854 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:34:50.130857 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:34:50.130861 | orchestrator | 2026-02-02 03:34:50.130865 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-02 03:34:50.130873 | orchestrator | Monday 02 February 2026 03:34:43 +0000 (0:00:00.755) 0:08:58.023 ******* 2026-02-02 03:34:50.130877 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:34:50.130880 | orchestrator | ok: 
[testbed-node-4] 2026-02-02 03:34:50.130884 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:34:50.130888 | orchestrator | 2026-02-02 03:34:50.130892 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-02 03:34:50.130896 | orchestrator | Monday 02 February 2026 03:34:44 +0000 (0:00:00.741) 0:08:58.764 ******* 2026-02-02 03:34:50.130900 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:34:50.130905 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:34:50.130909 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:34:50.130914 | orchestrator | 2026-02-02 03:34:50.130918 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-02 03:34:50.130922 | orchestrator | Monday 02 February 2026 03:34:44 +0000 (0:00:00.331) 0:08:59.096 ******* 2026-02-02 03:34:50.130926 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:34:50.130930 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:34:50.130935 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:34:50.130939 | orchestrator | 2026-02-02 03:34:50.130943 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-02 03:34:50.130947 | orchestrator | Monday 02 February 2026 03:34:45 +0000 (0:00:00.625) 0:08:59.722 ******* 2026-02-02 03:34:50.130952 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:34:50.130956 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:34:50.130960 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:34:50.130964 | orchestrator | 2026-02-02 03:34:50.130969 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-02 03:34:50.130973 | orchestrator | Monday 02 February 2026 03:34:45 +0000 (0:00:00.350) 0:09:00.073 ******* 2026-02-02 03:34:50.130977 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:34:50.130981 | orchestrator | ok: [testbed-node-4] 
2026-02-02 03:34:50.130986 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:34:50.130990 | orchestrator | 2026-02-02 03:34:50.130994 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-02 03:34:50.131001 | orchestrator | Monday 02 February 2026 03:34:46 +0000 (0:00:00.728) 0:09:00.802 ******* 2026-02-02 03:34:50.131005 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:34:50.131010 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:34:50.131014 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:34:50.131018 | orchestrator | 2026-02-02 03:34:50.131022 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-02 03:34:50.131026 | orchestrator | Monday 02 February 2026 03:34:47 +0000 (0:00:00.729) 0:09:01.531 ******* 2026-02-02 03:34:50.131031 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:34:50.131035 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:34:50.131039 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:34:50.131044 | orchestrator | 2026-02-02 03:34:50.131094 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-02 03:34:50.131098 | orchestrator | Monday 02 February 2026 03:34:48 +0000 (0:00:00.624) 0:09:02.155 ******* 2026-02-02 03:34:50.131103 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:34:50.131108 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:34:50.131112 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:34:50.131116 | orchestrator | 2026-02-02 03:34:50.131121 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-02 03:34:50.131125 | orchestrator | Monday 02 February 2026 03:34:48 +0000 (0:00:00.339) 0:09:02.495 ******* 2026-02-02 03:34:50.131129 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:34:50.131133 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:34:50.131138 | 
orchestrator | ok: [testbed-node-5] 2026-02-02 03:34:50.131142 | orchestrator | 2026-02-02 03:34:50.131146 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-02 03:34:50.131151 | orchestrator | Monday 02 February 2026 03:34:48 +0000 (0:00:00.375) 0:09:02.871 ******* 2026-02-02 03:34:50.131159 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:34:50.131163 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:34:50.131167 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:34:50.131172 | orchestrator | 2026-02-02 03:34:50.131176 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-02 03:34:50.131181 | orchestrator | Monday 02 February 2026 03:34:49 +0000 (0:00:00.355) 0:09:03.226 ******* 2026-02-02 03:34:50.131185 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:34:50.131189 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:34:50.131193 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:34:50.131198 | orchestrator | 2026-02-02 03:34:50.131202 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-02 03:34:50.131206 | orchestrator | Monday 02 February 2026 03:34:49 +0000 (0:00:00.677) 0:09:03.904 ******* 2026-02-02 03:34:50.131210 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:34:50.131215 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:34:50.131219 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:34:50.131223 | orchestrator | 2026-02-02 03:34:50.131231 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-02 03:35:25.657471 | orchestrator | Monday 02 February 2026 03:34:50 +0000 (0:00:00.358) 0:09:04.263 ******* 2026-02-02 03:35:25.657596 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:35:25.657613 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:35:25.657624 | orchestrator | skipping: [testbed-node-5] 
2026-02-02 03:35:25.657635 | orchestrator | 2026-02-02 03:35:25.657680 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-02 03:35:25.657695 | orchestrator | Monday 02 February 2026 03:34:50 +0000 (0:00:00.374) 0:09:04.637 ******* 2026-02-02 03:35:25.657703 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:35:25.657709 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:35:25.657716 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:35:25.657723 | orchestrator | 2026-02-02 03:35:25.657730 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-02 03:35:25.657737 | orchestrator | Monday 02 February 2026 03:34:50 +0000 (0:00:00.341) 0:09:04.979 ******* 2026-02-02 03:35:25.657743 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:35:25.657750 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:35:25.657757 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:35:25.657763 | orchestrator | 2026-02-02 03:35:25.657769 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-02 03:35:25.657776 | orchestrator | Monday 02 February 2026 03:34:51 +0000 (0:00:00.723) 0:09:05.702 ******* 2026-02-02 03:35:25.657782 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:35:25.657788 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:35:25.657794 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:35:25.657801 | orchestrator | 2026-02-02 03:35:25.657807 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-02-02 03:35:25.657813 | orchestrator | Monday 02 February 2026 03:34:52 +0000 (0:00:00.664) 0:09:06.366 ******* 2026-02-02 03:35:25.657819 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:35:25.657826 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:35:25.657832 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml 
for testbed-node-3 2026-02-02 03:35:25.657839 | orchestrator | 2026-02-02 03:35:25.657845 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-02-02 03:35:25.657852 | orchestrator | Monday 02 February 2026 03:34:52 +0000 (0:00:00.430) 0:09:06.797 ******* 2026-02-02 03:35:25.657858 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-02 03:35:25.657864 | orchestrator | 2026-02-02 03:35:25.657871 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-02-02 03:35:25.657877 | orchestrator | Monday 02 February 2026 03:34:55 +0000 (0:00:02.511) 0:09:09.308 ******* 2026-02-02 03:35:25.657885 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-02-02 03:35:25.657914 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:35:25.657921 | orchestrator | 2026-02-02 03:35:25.657928 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-02-02 03:35:25.657934 | orchestrator | Monday 02 February 2026 03:34:55 +0000 (0:00:00.271) 0:09:09.580 ******* 2026-02-02 03:35:25.657954 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-02 03:35:25.657968 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-02 03:35:25.657975 | orchestrator | 
2026-02-02 03:35:25.657981 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-02-02 03:35:25.657987 | orchestrator | Monday 02 February 2026 03:35:02 +0000 (0:00:07.181) 0:09:16.761 ******* 2026-02-02 03:35:25.657993 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-02 03:35:25.658000 | orchestrator | 2026-02-02 03:35:25.658007 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-02-02 03:35:25.658049 | orchestrator | Monday 02 February 2026 03:35:06 +0000 (0:00:03.412) 0:09:20.174 ******* 2026-02-02 03:35:25.658058 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 03:35:25.658066 | orchestrator | 2026-02-02 03:35:25.658073 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-02-02 03:35:25.658080 | orchestrator | Monday 02 February 2026 03:35:06 +0000 (0:00:00.577) 0:09:20.751 ******* 2026-02-02 03:35:25.658087 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-02 03:35:25.658094 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-02-02 03:35:25.658101 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-02 03:35:25.658108 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-02 03:35:25.658115 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-02-02 03:35:25.658123 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-02-02 03:35:25.658129 | orchestrator | 2026-02-02 03:35:25.658136 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-02-02 03:35:25.658143 | orchestrator | Monday 02 February 2026 03:35:07 +0000 (0:00:01.382) 
0:09:22.134 ******* 2026-02-02 03:35:25.658166 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 03:35:25.658177 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-02 03:35:25.658251 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-02 03:35:25.658261 | orchestrator | 2026-02-02 03:35:25.658272 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-02-02 03:35:25.658282 | orchestrator | Monday 02 February 2026 03:35:10 +0000 (0:00:02.010) 0:09:24.144 ******* 2026-02-02 03:35:25.658292 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-02 03:35:25.658304 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-02 03:35:25.658315 | orchestrator | changed: [testbed-node-3] 2026-02-02 03:35:25.658327 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-02 03:35:25.658338 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-02 03:35:25.658350 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:35:25.658361 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-02 03:35:25.658371 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-02 03:35:25.658398 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:35:25.658409 | orchestrator | 2026-02-02 03:35:25.658418 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-02 03:35:25.658428 | orchestrator | Monday 02 February 2026 03:35:11 +0000 (0:00:01.235) 0:09:25.380 ******* 2026-02-02 03:35:25.658438 | orchestrator | changed: [testbed-node-3] 2026-02-02 03:35:25.658447 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:35:25.658456 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:35:25.658465 | orchestrator | 2026-02-02 03:35:25.658475 | orchestrator | TASK [ceph-mds : Non_containerized.yml] 
**************************************** 2026-02-02 03:35:25.658486 | orchestrator | Monday 02 February 2026 03:35:13 +0000 (0:00:02.589) 0:09:27.970 ******* 2026-02-02 03:35:25.658496 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:35:25.658506 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:35:25.658516 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:35:25.658527 | orchestrator | 2026-02-02 03:35:25.658534 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-02 03:35:25.658541 | orchestrator | Monday 02 February 2026 03:35:14 +0000 (0:00:00.658) 0:09:28.628 ******* 2026-02-02 03:35:25.658547 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 03:35:25.658553 | orchestrator | 2026-02-02 03:35:25.658560 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-02 03:35:25.658566 | orchestrator | Monday 02 February 2026 03:35:15 +0000 (0:00:00.625) 0:09:29.254 ******* 2026-02-02 03:35:25.658572 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 03:35:25.658578 | orchestrator | 2026-02-02 03:35:25.658585 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-02 03:35:25.658591 | orchestrator | Monday 02 February 2026 03:35:16 +0000 (0:00:00.890) 0:09:30.144 ******* 2026-02-02 03:35:25.658598 | orchestrator | changed: [testbed-node-3] 2026-02-02 03:35:25.658604 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:35:25.658610 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:35:25.658616 | orchestrator | 2026-02-02 03:35:25.658623 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-02 03:35:25.658635 | orchestrator | Monday 02 February 2026 03:35:17 +0000 
(0:00:01.199) 0:09:31.343 ******* 2026-02-02 03:35:25.658641 | orchestrator | changed: [testbed-node-3] 2026-02-02 03:35:25.658647 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:35:25.658654 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:35:25.658660 | orchestrator | 2026-02-02 03:35:25.658666 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-02 03:35:25.658673 | orchestrator | Monday 02 February 2026 03:35:18 +0000 (0:00:01.105) 0:09:32.449 ******* 2026-02-02 03:35:25.658679 | orchestrator | changed: [testbed-node-3] 2026-02-02 03:35:25.658685 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:35:25.658691 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:35:25.658697 | orchestrator | 2026-02-02 03:35:25.658703 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-02 03:35:25.658709 | orchestrator | Monday 02 February 2026 03:35:19 +0000 (0:00:01.661) 0:09:34.111 ******* 2026-02-02 03:35:25.658716 | orchestrator | changed: [testbed-node-3] 2026-02-02 03:35:25.658722 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:35:25.658728 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:35:25.658734 | orchestrator | 2026-02-02 03:35:25.658741 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-02 03:35:25.658747 | orchestrator | Monday 02 February 2026 03:35:22 +0000 (0:00:02.329) 0:09:36.441 ******* 2026-02-02 03:35:25.658753 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:35:25.658759 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:35:25.658766 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:35:25.658772 | orchestrator | 2026-02-02 03:35:25.658778 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-02 03:35:25.658791 | orchestrator | Monday 02 February 2026 03:35:23 +0000 (0:00:01.323) 0:09:37.764 
******* 2026-02-02 03:35:25.658797 | orchestrator | changed: [testbed-node-3] 2026-02-02 03:35:25.658803 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:35:25.658809 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:35:25.658816 | orchestrator | 2026-02-02 03:35:25.658822 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-02 03:35:25.658828 | orchestrator | Monday 02 February 2026 03:35:24 +0000 (0:00:01.067) 0:09:38.831 ******* 2026-02-02 03:35:25.658834 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 03:35:25.658841 | orchestrator | 2026-02-02 03:35:25.658847 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-02-02 03:35:25.658853 | orchestrator | Monday 02 February 2026 03:35:25 +0000 (0:00:00.613) 0:09:39.445 ******* 2026-02-02 03:35:25.658859 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:35:25.658865 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:35:25.658871 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:35:25.658878 | orchestrator | 2026-02-02 03:35:25.658884 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-02-02 03:35:25.658899 | orchestrator | Monday 02 February 2026 03:35:25 +0000 (0:00:00.339) 0:09:39.784 ******* 2026-02-02 03:35:45.676627 | orchestrator | changed: [testbed-node-3] 2026-02-02 03:35:45.676743 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:35:45.676760 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:35:45.676772 | orchestrator | 2026-02-02 03:35:45.676784 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-02-02 03:35:45.676797 | orchestrator | Monday 02 February 2026 03:35:27 +0000 (0:00:01.540) 0:09:41.324 ******* 2026-02-02 03:35:45.676809 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-3)  2026-02-02 03:35:45.676821 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-02 03:35:45.676833 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-02 03:35:45.676843 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:35:45.676855 | orchestrator | 2026-02-02 03:35:45.676866 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-02-02 03:35:45.676877 | orchestrator | Monday 02 February 2026 03:35:27 +0000 (0:00:00.719) 0:09:42.043 ******* 2026-02-02 03:35:45.676888 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:35:45.676901 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:35:45.676912 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:35:45.676923 | orchestrator | 2026-02-02 03:35:45.676934 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-02-02 03:35:45.676945 | orchestrator | 2026-02-02 03:35:45.676956 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-02 03:35:45.676967 | orchestrator | Monday 02 February 2026 03:35:28 +0000 (0:00:00.597) 0:09:42.641 ******* 2026-02-02 03:35:45.676979 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 03:35:45.676992 | orchestrator | 2026-02-02 03:35:45.677003 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-02 03:35:45.677014 | orchestrator | Monday 02 February 2026 03:35:29 +0000 (0:00:00.875) 0:09:43.516 ******* 2026-02-02 03:35:45.677026 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 03:35:45.677037 | orchestrator | 2026-02-02 03:35:45.677048 | orchestrator | TASK [ceph-handler : Check for a mon container] 
******************************** 2026-02-02 03:35:45.677060 | orchestrator | Monday 02 February 2026 03:35:30 +0000 (0:00:00.685) 0:09:44.202 ******* 2026-02-02 03:35:45.677071 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:35:45.677082 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:35:45.677093 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:35:45.677129 | orchestrator | 2026-02-02 03:35:45.677141 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-02 03:35:45.677152 | orchestrator | Monday 02 February 2026 03:35:30 +0000 (0:00:00.668) 0:09:44.871 ******* 2026-02-02 03:35:45.677164 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:35:45.677177 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:35:45.677190 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:35:45.677202 | orchestrator | 2026-02-02 03:35:45.677214 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-02 03:35:45.677227 | orchestrator | Monday 02 February 2026 03:35:31 +0000 (0:00:00.789) 0:09:45.661 ******* 2026-02-02 03:35:45.677239 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:35:45.677317 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:35:45.677332 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:35:45.677344 | orchestrator | 2026-02-02 03:35:45.677356 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-02 03:35:45.677369 | orchestrator | Monday 02 February 2026 03:35:32 +0000 (0:00:00.781) 0:09:46.442 ******* 2026-02-02 03:35:45.677380 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:35:45.677390 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:35:45.677401 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:35:45.677412 | orchestrator | 2026-02-02 03:35:45.677423 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-02 
03:35:45.677434 | orchestrator | Monday 02 February 2026 03:35:33 +0000 (0:00:00.750) 0:09:47.193 ******* 2026-02-02 03:35:45.677445 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:35:45.677456 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:35:45.677466 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:35:45.677477 | orchestrator | 2026-02-02 03:35:45.677488 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-02 03:35:45.677511 | orchestrator | Monday 02 February 2026 03:35:33 +0000 (0:00:00.639) 0:09:47.832 ******* 2026-02-02 03:35:45.677531 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:35:45.677543 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:35:45.677553 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:35:45.677564 | orchestrator | 2026-02-02 03:35:45.677575 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-02 03:35:45.677586 | orchestrator | Monday 02 February 2026 03:35:34 +0000 (0:00:00.373) 0:09:48.206 ******* 2026-02-02 03:35:45.677597 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:35:45.677608 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:35:45.677618 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:35:45.677629 | orchestrator | 2026-02-02 03:35:45.677640 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-02 03:35:45.677651 | orchestrator | Monday 02 February 2026 03:35:34 +0000 (0:00:00.342) 0:09:48.549 ******* 2026-02-02 03:35:45.677662 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:35:45.677672 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:35:45.677683 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:35:45.677694 | orchestrator | 2026-02-02 03:35:45.677705 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-02 03:35:45.677715 | 
orchestrator | Monday 02 February 2026 03:35:35 +0000 (0:00:00.726) 0:09:49.275 ******* 2026-02-02 03:35:45.677726 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:35:45.677737 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:35:45.677748 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:35:45.677758 | orchestrator | 2026-02-02 03:35:45.677769 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-02 03:35:45.677780 | orchestrator | Monday 02 February 2026 03:35:36 +0000 (0:00:01.095) 0:09:50.371 ******* 2026-02-02 03:35:45.677791 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:35:45.677802 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:35:45.677830 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:35:45.677842 | orchestrator | 2026-02-02 03:35:45.677853 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-02 03:35:45.677874 | orchestrator | Monday 02 February 2026 03:35:36 +0000 (0:00:00.359) 0:09:50.731 ******* 2026-02-02 03:35:45.677893 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:35:45.677913 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:35:45.677931 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:35:45.677948 | orchestrator | 2026-02-02 03:35:45.677960 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-02 03:35:45.677971 | orchestrator | Monday 02 February 2026 03:35:36 +0000 (0:00:00.340) 0:09:51.071 ******* 2026-02-02 03:35:45.677982 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:35:45.677993 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:35:45.678004 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:35:45.678105 | orchestrator | 2026-02-02 03:35:45.678121 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-02 03:35:45.678132 | orchestrator | Monday 02 February 2026 
03:35:37 +0000 (0:00:00.417) 0:09:51.488 ******* 2026-02-02 03:35:45.678143 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:35:45.678154 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:35:45.678165 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:35:45.678176 | orchestrator | 2026-02-02 03:35:45.678187 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-02 03:35:45.678198 | orchestrator | Monday 02 February 2026 03:35:38 +0000 (0:00:00.734) 0:09:52.223 ******* 2026-02-02 03:35:45.678209 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:35:45.678219 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:35:45.678230 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:35:45.678241 | orchestrator | 2026-02-02 03:35:45.678252 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-02 03:35:45.678284 | orchestrator | Monday 02 February 2026 03:35:38 +0000 (0:00:00.383) 0:09:52.606 ******* 2026-02-02 03:35:45.678296 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:35:45.678307 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:35:45.678318 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:35:45.678329 | orchestrator | 2026-02-02 03:35:45.678340 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-02 03:35:45.678351 | orchestrator | Monday 02 February 2026 03:35:38 +0000 (0:00:00.326) 0:09:52.933 ******* 2026-02-02 03:35:45.678362 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:35:45.678374 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:35:45.678385 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:35:45.678396 | orchestrator | 2026-02-02 03:35:45.678407 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-02 03:35:45.678418 | orchestrator | Monday 02 February 2026 03:35:39 +0000 (0:00:00.372) 
0:09:53.306 ******* 2026-02-02 03:35:45.678429 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:35:45.678440 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:35:45.678451 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:35:45.678462 | orchestrator | 2026-02-02 03:35:45.678474 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-02 03:35:45.678485 | orchestrator | Monday 02 February 2026 03:35:39 +0000 (0:00:00.632) 0:09:53.938 ******* 2026-02-02 03:35:45.678495 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:35:45.678506 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:35:45.678517 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:35:45.678528 | orchestrator | 2026-02-02 03:35:45.678547 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-02 03:35:45.678559 | orchestrator | Monday 02 February 2026 03:35:40 +0000 (0:00:00.406) 0:09:54.344 ******* 2026-02-02 03:35:45.678570 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:35:45.678581 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:35:45.678592 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:35:45.678602 | orchestrator | 2026-02-02 03:35:45.678613 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-02 03:35:45.678625 | orchestrator | Monday 02 February 2026 03:35:40 +0000 (0:00:00.582) 0:09:54.927 ******* 2026-02-02 03:35:45.678645 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 03:35:45.678656 | orchestrator | 2026-02-02 03:35:45.678668 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-02 03:35:45.678679 | orchestrator | Monday 02 February 2026 03:35:41 +0000 (0:00:00.894) 0:09:55.821 ******* 2026-02-02 03:35:45.678690 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 03:35:45.678701 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-02 03:35:45.678712 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-02 03:35:45.678723 | orchestrator | 2026-02-02 03:35:45.678734 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-02 03:35:45.678745 | orchestrator | Monday 02 February 2026 03:35:43 +0000 (0:00:02.089) 0:09:57.911 ******* 2026-02-02 03:35:45.678757 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-02 03:35:45.678768 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-02 03:35:45.678779 | orchestrator | changed: [testbed-node-3] 2026-02-02 03:35:45.678790 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-02 03:35:45.678801 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-02 03:35:45.678812 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:35:45.678823 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-02 03:35:45.678834 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-02 03:35:45.678845 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:35:45.678856 | orchestrator | 2026-02-02 03:35:45.678867 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-02-02 03:35:45.678879 | orchestrator | Monday 02 February 2026 03:35:45 +0000 (0:00:01.232) 0:09:59.143 ******* 2026-02-02 03:35:45.678889 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:35:45.678901 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:35:45.678911 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:35:45.678923 | orchestrator | 2026-02-02 03:35:45.678934 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-02 03:35:45.678955 | orchestrator | Monday 02 February 2026 03:35:45 +0000 
(0:00:00.666) 0:09:59.809 ******* 2026-02-02 03:36:35.078573 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 03:36:35.078660 | orchestrator | 2026-02-02 03:36:35.078669 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-02 03:36:35.078675 | orchestrator | Monday 02 February 2026 03:35:46 +0000 (0:00:00.626) 0:10:00.436 ******* 2026-02-02 03:36:35.078680 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-02 03:36:35.078686 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-02 03:36:35.078690 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-02 03:36:35.078694 | orchestrator | 2026-02-02 03:36:35.078698 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-02 03:36:35.078702 | orchestrator | Monday 02 February 2026 03:35:47 +0000 (0:00:00.860) 0:10:01.297 ******* 2026-02-02 03:36:35.078706 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 03:36:35.078711 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-02 03:36:35.078715 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 03:36:35.078719 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-02 03:36:35.078741 | orchestrator | 
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 03:36:35.078745 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-02 03:36:35.078749 | orchestrator | 2026-02-02 03:36:35.078752 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-02 03:36:35.078756 | orchestrator | Monday 02 February 2026 03:35:51 +0000 (0:00:04.804) 0:10:06.101 ******* 2026-02-02 03:36:35.078760 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 03:36:35.078764 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-02 03:36:35.078768 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 03:36:35.078772 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-02 03:36:35.078776 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 03:36:35.078791 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-02 03:36:35.078795 | orchestrator | 2026-02-02 03:36:35.078798 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-02 03:36:35.078802 | orchestrator | Monday 02 February 2026 03:35:54 +0000 (0:00:02.214) 0:10:08.316 ******* 2026-02-02 03:36:35.078807 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-02 03:36:35.078811 | orchestrator | changed: [testbed-node-3] 2026-02-02 03:36:35.078815 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-02 03:36:35.078819 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:36:35.078823 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-02 03:36:35.078827 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:36:35.078831 | orchestrator | 2026-02-02 
03:36:35.078834 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-02 03:36:35.078838 | orchestrator | Monday 02 February 2026 03:35:55 +0000 (0:00:01.249) 0:10:09.565 ******* 2026-02-02 03:36:35.078842 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-02-02 03:36:35.078846 | orchestrator | 2026-02-02 03:36:35.078850 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-02 03:36:35.078854 | orchestrator | Monday 02 February 2026 03:35:55 +0000 (0:00:00.251) 0:10:09.816 ******* 2026-02-02 03:36:35.078858 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 03:36:35.078862 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 03:36:35.078866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 03:36:35.078870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 03:36:35.078874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 03:36:35.078878 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:36:35.078889 | orchestrator | 2026-02-02 03:36:35.078893 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-02 03:36:35.078897 | orchestrator | Monday 02 February 2026 03:35:56 +0000 (0:00:00.948) 0:10:10.764 ******* 2026-02-02 03:36:35.078910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 
3, 'type': 'replicated'}})  2026-02-02 03:36:35.078915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 03:36:35.078918 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 03:36:35.078926 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 03:36:35.078930 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 03:36:35.078934 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:36:35.078938 | orchestrator | 2026-02-02 03:36:35.078942 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-02 03:36:35.078945 | orchestrator | Monday 02 February 2026 03:35:57 +0000 (0:00:00.963) 0:10:11.728 ******* 2026-02-02 03:36:35.078949 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-02 03:36:35.078953 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-02 03:36:35.078957 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-02 03:36:35.078961 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-02 03:36:35.078965 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 
'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-02 03:36:35.078968 | orchestrator | 2026-02-02 03:36:35.078972 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-02 03:36:35.078976 | orchestrator | Monday 02 February 2026 03:36:25 +0000 (0:00:27.849) 0:10:39.577 ******* 2026-02-02 03:36:35.078980 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:36:35.078984 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:36:35.078988 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:36:35.078992 | orchestrator | 2026-02-02 03:36:35.078995 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-02 03:36:35.078999 | orchestrator | Monday 02 February 2026 03:36:26 +0000 (0:00:00.697) 0:10:40.275 ******* 2026-02-02 03:36:35.079003 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:36:35.079007 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:36:35.079011 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:36:35.079014 | orchestrator | 2026-02-02 03:36:35.079021 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-02 03:36:35.079025 | orchestrator | Monday 02 February 2026 03:36:26 +0000 (0:00:00.350) 0:10:40.625 ******* 2026-02-02 03:36:35.079029 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 03:36:35.079033 | orchestrator | 2026-02-02 03:36:35.079037 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-02-02 03:36:35.079041 | orchestrator | Monday 02 February 2026 03:36:27 +0000 (0:00:00.884) 0:10:41.509 ******* 2026-02-02 03:36:35.079044 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 03:36:35.079049 | orchestrator | 
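The "Create rgw pools" task above loops over a dict of pool definitions (`pg_num`, `size`, `type`) and creates each pool on a mon node. A minimal sketch of the commands that loop corresponds to, reconstructed only from the item structure visible in the log — the exact CLI flags ceph-ansible uses (and the container wrapper for this dockerized deployment) may differ:

```python
# Sketch: rebuild the per-pool commands implied by the "Create rgw pools"
# loop items shown in the log. Pool names and values are taken verbatim
# from the log; the command shape is an assumption, not the role's code.
pools = {
    "default.rgw.buckets.data":  {"pg_num": 8, "size": 3, "type": "replicated"},
    "default.rgw.buckets.index": {"pg_num": 8, "size": 3, "type": "replicated"},
    "default.rgw.control":       {"pg_num": 8, "size": 3, "type": "replicated"},
    "default.rgw.log":           {"pg_num": 8, "size": 3, "type": "replicated"},
    "default.rgw.meta":          {"pg_num": 8, "size": 3, "type": "replicated"},
}

def pool_commands(pools):
    """One create + one replica-size command per pool, like the loop above."""
    cmds = []
    for name, opts in sorted(pools.items()):
        cmds.append(f"ceph osd pool create {name} {opts['pg_num']} {opts['type']}")
        cmds.append(f"ceph osd pool set {name} size {opts['size']}")
    return cmds

for cmd in pool_commands(pools):
    print(cmd)
```

This also makes the 27.85s runtime of the task plausible: ten `ceph` invocations, each delegated to testbed-node-0 and executed inside the mon container.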
2026-02-02 03:36:35.079053 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-02 03:36:35.079058 | orchestrator | Monday 02 February 2026 03:36:27 +0000 (0:00:00.598) 0:10:42.108 ******* 2026-02-02 03:36:35.079063 | orchestrator | changed: [testbed-node-3] 2026-02-02 03:36:35.079067 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:36:35.079071 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:36:35.079076 | orchestrator | 2026-02-02 03:36:35.079080 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-02 03:36:35.079085 | orchestrator | Monday 02 February 2026 03:36:29 +0000 (0:00:01.230) 0:10:43.338 ******* 2026-02-02 03:36:35.079093 | orchestrator | changed: [testbed-node-3] 2026-02-02 03:36:35.079097 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:36:35.079102 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:36:35.079106 | orchestrator | 2026-02-02 03:36:35.079111 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-02 03:36:35.079115 | orchestrator | Monday 02 February 2026 03:36:30 +0000 (0:00:01.458) 0:10:44.797 ******* 2026-02-02 03:36:35.079120 | orchestrator | changed: [testbed-node-3] 2026-02-02 03:36:35.079124 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:36:35.079129 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:36:35.079133 | orchestrator | 2026-02-02 03:36:35.079138 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-02 03:36:35.079142 | orchestrator | Monday 02 February 2026 03:36:32 +0000 (0:00:01.753) 0:10:46.550 ******* 2026-02-02 03:36:35.079147 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-02 03:36:35.079151 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 
'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-02 03:36:35.079158 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-02 03:36:38.738604 | orchestrator | 2026-02-02 03:36:38.738701 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-02 03:36:38.738716 | orchestrator | Monday 02 February 2026 03:36:35 +0000 (0:00:02.656) 0:10:49.207 ******* 2026-02-02 03:36:38.738724 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:36:38.738733 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:36:38.738739 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:36:38.738745 | orchestrator | 2026-02-02 03:36:38.738752 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-02 03:36:38.738758 | orchestrator | Monday 02 February 2026 03:36:35 +0000 (0:00:00.442) 0:10:49.650 ******* 2026-02-02 03:36:38.738766 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 03:36:38.738772 | orchestrator | 2026-02-02 03:36:38.738779 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-02-02 03:36:38.738786 | orchestrator | Monday 02 February 2026 03:36:36 +0000 (0:00:00.603) 0:10:50.254 ******* 2026-02-02 03:36:38.738792 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:36:38.738799 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:36:38.738805 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:36:38.738812 | orchestrator | 2026-02-02 03:36:38.738818 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-02-02 03:36:38.738824 | orchestrator | Monday 02 February 2026 03:36:36 +0000 (0:00:00.651) 0:10:50.906 ******* 2026-02-02 03:36:38.738831 | orchestrator 
| skipping: [testbed-node-3] 2026-02-02 03:36:38.738837 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:36:38.738843 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:36:38.738849 | orchestrator | 2026-02-02 03:36:38.738855 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-02-02 03:36:38.738862 | orchestrator | Monday 02 February 2026 03:36:37 +0000 (0:00:00.367) 0:10:51.273 ******* 2026-02-02 03:36:38.738868 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-02 03:36:38.738875 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-02 03:36:38.738881 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-02 03:36:38.738888 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:36:38.738894 | orchestrator | 2026-02-02 03:36:38.738901 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-02-02 03:36:38.738907 | orchestrator | Monday 02 February 2026 03:36:37 +0000 (0:00:00.701) 0:10:51.974 ******* 2026-02-02 03:36:38.738914 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:36:38.738921 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:36:38.738951 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:36:38.738958 | orchestrator | 2026-02-02 03:36:38.738965 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 03:36:38.738973 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-02-02 03:36:38.738996 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-02-02 03:36:38.739004 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-02-02 03:36:38.739010 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  
rescued=0 ignored=0 2026-02-02 03:36:38.739017 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-02-02 03:36:38.739023 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-02-02 03:36:38.739030 | orchestrator | 2026-02-02 03:36:38.739036 | orchestrator | 2026-02-02 03:36:38.739043 | orchestrator | 2026-02-02 03:36:38.739050 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 03:36:38.739056 | orchestrator | Monday 02 February 2026 03:36:38 +0000 (0:00:00.286) 0:10:52.261 ******* 2026-02-02 03:36:38.739063 | orchestrator | =============================================================================== 2026-02-02 03:36:38.739069 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 60.68s 2026-02-02 03:36:38.739076 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.38s 2026-02-02 03:36:38.739083 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 27.85s 2026-02-02 03:36:38.739089 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.20s 2026-02-02 03:36:38.739096 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 22.14s 2026-02-02 03:36:38.739102 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.60s 2026-02-02 03:36:38.739108 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.65s 2026-02-02 03:36:38.739114 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 9.90s 2026-02-02 03:36:38.739121 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.04s 2026-02-02 03:36:38.739127 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.18s 2026-02-02 03:36:38.739133 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.37s 2026-02-02 03:36:38.739141 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.37s 2026-02-02 03:36:38.739149 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.80s 2026-02-02 03:36:38.739172 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.75s 2026-02-02 03:36:38.739181 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.89s 2026-02-02 03:36:38.739189 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.56s 2026-02-02 03:36:38.739197 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.51s 2026-02-02 03:36:38.739204 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.47s 2026-02-02 03:36:38.739211 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.41s 2026-02-02 03:36:38.739219 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.08s 2026-02-02 03:36:41.285889 | orchestrator | 2026-02-02 03:36:41 | INFO  | Task 6578866f-8fb5-4acb-ab75-ad8d1d661ef0 
(ceph-pools) was prepared for execution. 2026-02-02 03:36:41.286008 | orchestrator | 2026-02-02 03:36:41 | INFO  | It takes a moment until task 6578866f-8fb5-4acb-ab75-ad8d1d661ef0 (ceph-pools) has been started and output is visible here. 2026-02-02 03:36:56.681200 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-02 03:36:56.681336 | orchestrator | 2.16.14 2026-02-02 03:36:56.681361 | orchestrator | 2026-02-02 03:36:56.681377 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-02-02 03:36:56.681394 | orchestrator | 2026-02-02 03:36:56.681409 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-02 03:36:56.681425 | orchestrator | Monday 02 February 2026 03:36:46 +0000 (0:00:00.759) 0:00:00.759 ******* 2026-02-02 03:36:56.681440 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 03:36:56.681456 | orchestrator | 2026-02-02 03:36:56.681472 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-02 03:36:56.681487 | orchestrator | Monday 02 February 2026 03:36:47 +0000 (0:00:00.747) 0:00:01.506 ******* 2026-02-02 03:36:56.681568 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:36:56.681585 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:36:56.681600 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:36:56.681615 | orchestrator | 2026-02-02 03:36:56.681630 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-02 03:36:56.681645 | orchestrator | Monday 02 February 2026 03:36:47 +0000 (0:00:00.636) 0:00:02.143 ******* 2026-02-02 03:36:56.681661 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:36:56.681675 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:36:56.681690 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:36:56.681705 
| orchestrator | 2026-02-02 03:36:56.681720 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-02 03:36:56.681736 | orchestrator | Monday 02 February 2026 03:36:48 +0000 (0:00:00.352) 0:00:02.496 ******* 2026-02-02 03:36:56.681843 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:36:56.681861 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:36:56.681876 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:36:56.681892 | orchestrator | 2026-02-02 03:36:56.681926 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-02 03:36:56.681942 | orchestrator | Monday 02 February 2026 03:36:49 +0000 (0:00:00.946) 0:00:03.442 ******* 2026-02-02 03:36:56.681956 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:36:56.681970 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:36:56.681985 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:36:56.682000 | orchestrator | 2026-02-02 03:36:56.682015 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-02 03:36:56.682085 | orchestrator | Monday 02 February 2026 03:36:49 +0000 (0:00:00.367) 0:00:03.810 ******* 2026-02-02 03:36:56.682095 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:36:56.682104 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:36:56.682113 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:36:56.682121 | orchestrator | 2026-02-02 03:36:56.682131 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-02 03:36:56.682139 | orchestrator | Monday 02 February 2026 03:36:49 +0000 (0:00:00.365) 0:00:04.175 ******* 2026-02-02 03:36:56.682148 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:36:56.682157 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:36:56.682166 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:36:56.682174 | orchestrator | 2026-02-02 03:36:56.682184 | orchestrator | TASK 
[ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-02 03:36:56.682193 | orchestrator | Monday 02 February 2026 03:36:50 +0000 (0:00:00.338) 0:00:04.514 ******* 2026-02-02 03:36:56.682202 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:36:56.682212 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:36:56.682221 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:36:56.682229 | orchestrator | 2026-02-02 03:36:56.682238 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-02 03:36:56.682314 | orchestrator | Monday 02 February 2026 03:36:50 +0000 (0:00:00.607) 0:00:05.121 ******* 2026-02-02 03:36:56.682325 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:36:56.682333 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:36:56.682342 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:36:56.682351 | orchestrator | 2026-02-02 03:36:56.682359 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-02 03:36:56.682368 | orchestrator | Monday 02 February 2026 03:36:51 +0000 (0:00:00.337) 0:00:05.459 ******* 2026-02-02 03:36:56.682377 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 03:36:56.682386 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 03:36:56.682395 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 03:36:56.682403 | orchestrator | 2026-02-02 03:36:56.682412 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-02 03:36:56.682421 | orchestrator | Monday 02 February 2026 03:36:51 +0000 (0:00:00.750) 0:00:06.210 ******* 2026-02-02 03:36:56.682430 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:36:56.682438 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:36:56.682447 | 
orchestrator | ok: [testbed-node-5] 2026-02-02 03:36:56.682456 | orchestrator | 2026-02-02 03:36:56.682465 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-02 03:36:56.682473 | orchestrator | Monday 02 February 2026 03:36:52 +0000 (0:00:00.477) 0:00:06.687 ******* 2026-02-02 03:36:56.682482 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 03:36:56.682490 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 03:36:56.682552 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 03:36:56.682562 | orchestrator | 2026-02-02 03:36:56.682571 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-02 03:36:56.682580 | orchestrator | Monday 02 February 2026 03:36:54 +0000 (0:00:02.107) 0:00:08.795 ******* 2026-02-02 03:36:56.682590 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-02 03:36:56.682599 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-02 03:36:56.682608 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-02 03:36:56.682617 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:36:56.682626 | orchestrator | 2026-02-02 03:36:56.682655 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-02 03:36:56.682665 | orchestrator | Monday 02 February 2026 03:36:55 +0000 (0:00:00.741) 0:00:09.536 ******* 2026-02-02 03:36:56.682676 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-02 03:36:56.682688 | orchestrator | skipping: [testbed-node-3] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-02 03:36:56.682698 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-02 03:36:56.682707 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:36:56.682716 | orchestrator | 2026-02-02 03:36:56.682725 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-02 03:36:56.682734 | orchestrator | Monday 02 February 2026 03:36:56 +0000 (0:00:01.125) 0:00:10.662 ******* 2026-02-02 03:36:56.682752 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 03:36:56.682773 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 03:36:56.682783 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 03:36:56.682792 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:36:56.682801 | orchestrator | 2026-02-02 03:36:56.682810 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-02 03:36:56.682819 | orchestrator | Monday 02 February 2026 03:36:56 +0000 (0:00:00.169) 0:00:10.832 ******* 2026-02-02 03:36:56.682830 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'fef826d0639c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-02 03:36:53.192259', 'end': '2026-02-02 03:36:53.234435', 'delta': '0:00:00.042176', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fef826d0639c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-02 03:36:56.682843 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a42e682d4965', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-02 03:36:53.724075', 'end': '2026-02-02 03:36:53.773337', 'delta': '0:00:00.049262', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': 
['a42e682d4965'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-02 03:36:56.682860 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '39d29fabc2d2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-02 03:36:54.265654', 'end': '2026-02-02 03:36:54.305884', 'delta': '0:00:00.040230', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['39d29fabc2d2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-02 03:37:04.027806 | orchestrator | 2026-02-02 03:37:04.027907 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-02 03:37:04.027918 | orchestrator | Monday 02 February 2026 03:36:56 +0000 (0:00:00.193) 0:00:11.025 ******* 2026-02-02 03:37:04.027953 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:37:04.027960 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:37:04.027966 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:37:04.027972 | orchestrator | 2026-02-02 03:37:04.027980 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-02 03:37:04.027989 | orchestrator | Monday 02 February 2026 03:36:57 +0000 (0:00:00.486) 0:00:11.512 ******* 2026-02-02 03:37:04.027998 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-02 03:37:04.028006 | orchestrator | 2026-02-02 03:37:04.028028 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-02 03:37:04.028037 | 
orchestrator | Monday 02 February 2026 03:36:58 +0000 (0:00:01.624) 0:00:13.136 ******* 2026-02-02 03:37:04.028046 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:37:04.028054 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:37:04.028063 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:37:04.028071 | orchestrator | 2026-02-02 03:37:04.028083 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-02 03:37:04.028092 | orchestrator | Monday 02 February 2026 03:36:59 +0000 (0:00:00.379) 0:00:13.516 ******* 2026-02-02 03:37:04.028100 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:37:04.028107 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:37:04.028116 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:37:04.028124 | orchestrator | 2026-02-02 03:37:04.028132 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 03:37:04.028140 | orchestrator | Monday 02 February 2026 03:37:00 +0000 (0:00:00.898) 0:00:14.414 ******* 2026-02-02 03:37:04.028147 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:37:04.028154 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:37:04.028162 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:37:04.028171 | orchestrator | 2026-02-02 03:37:04.028179 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-02 03:37:04.028186 | orchestrator | Monday 02 February 2026 03:37:00 +0000 (0:00:00.339) 0:00:14.753 ******* 2026-02-02 03:37:04.028194 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:37:04.028203 | orchestrator | 2026-02-02 03:37:04.028211 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-02 03:37:04.028219 | orchestrator | Monday 02 February 2026 03:37:00 +0000 (0:00:00.134) 0:00:14.887 ******* 2026-02-02 03:37:04.028226 | orchestrator | skipping: 
[testbed-node-3] 2026-02-02 03:37:04.028234 | orchestrator | 2026-02-02 03:37:04.028241 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 03:37:04.028249 | orchestrator | Monday 02 February 2026 03:37:00 +0000 (0:00:00.264) 0:00:15.152 ******* 2026-02-02 03:37:04.028256 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:37:04.028264 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:37:04.028272 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:37:04.028280 | orchestrator | 2026-02-02 03:37:04.028287 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-02 03:37:04.028295 | orchestrator | Monday 02 February 2026 03:37:01 +0000 (0:00:00.321) 0:00:15.473 ******* 2026-02-02 03:37:04.028303 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:37:04.028312 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:37:04.028319 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:37:04.028328 | orchestrator | 2026-02-02 03:37:04.028336 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-02 03:37:04.028344 | orchestrator | Monday 02 February 2026 03:37:01 +0000 (0:00:00.381) 0:00:15.855 ******* 2026-02-02 03:37:04.028354 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:37:04.028362 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:37:04.028370 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:37:04.028378 | orchestrator | 2026-02-02 03:37:04.028387 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-02 03:37:04.028395 | orchestrator | Monday 02 February 2026 03:37:02 +0000 (0:00:00.602) 0:00:16.457 ******* 2026-02-02 03:37:04.028412 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:37:04.028421 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:37:04.028430 | orchestrator | skipping: 
[testbed-node-5] 2026-02-02 03:37:04.028439 | orchestrator | 2026-02-02 03:37:04.028447 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-02 03:37:04.028456 | orchestrator | Monday 02 February 2026 03:37:02 +0000 (0:00:00.341) 0:00:16.799 ******* 2026-02-02 03:37:04.028465 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:37:04.028473 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:37:04.028481 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:37:04.028488 | orchestrator | 2026-02-02 03:37:04.028497 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-02 03:37:04.028555 | orchestrator | Monday 02 February 2026 03:37:02 +0000 (0:00:00.371) 0:00:17.170 ******* 2026-02-02 03:37:04.028565 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:37:04.028574 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:37:04.028582 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:37:04.028590 | orchestrator | 2026-02-02 03:37:04.028599 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-02 03:37:04.028608 | orchestrator | Monday 02 February 2026 03:37:03 +0000 (0:00:00.606) 0:00:17.776 ******* 2026-02-02 03:37:04.028616 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:37:04.028624 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:37:04.028632 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:37:04.028640 | orchestrator | 2026-02-02 03:37:04.028648 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-02 03:37:04.028656 | orchestrator | Monday 02 February 2026 03:37:03 +0000 (0:00:00.365) 0:00:18.142 ******* 2026-02-02 03:37:04.028686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--2b8f5a57--fc4d--5c4a--8869--764dca19b379-osd--block--2b8f5a57--fc4d--5c4a--8869--764dca19b379', 'dm-uuid-LVM-2Xx1rXy8ZvvzVeymXUM2Y23jmTeKUn30gyH8a84MHrJn7bcz7phSu8LEA3bm3DqO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.028707 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--af42a967--eb71--546a--abb0--a5185990ed2a-osd--block--af42a967--eb71--546a--abb0--a5185990ed2a', 'dm-uuid-LVM-nQNI9mGSypmWJN7Kribh0RNL5qLQKFSceYxT4mfzBYfoYiha3ZzoEdYR0rTnnIvK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.028717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.028728 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.028736 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.028753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.028761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.028770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89-osd--block--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89', 'dm-uuid-LVM-bGXwDmNnGJLl15xDO66UDgeGoDbpg8C0HvMSdsO6YcSLb4aDqGATNEcOudg8iQom'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': 
'512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.028786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.076769 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--106e1245--4ea8--54a2--9b27--5c2b147fae19-osd--block--106e1245--4ea8--54a2--9b27--5c2b147fae19', 'dm-uuid-LVM-7fojGdQjjxzlZ1d67G3lfXV0uQvvNrpG74l8TP6AWG5LY1LTlUkEVjmQPc2hTMkL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.076846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.076856 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.076864 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.076885 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.076907 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part1', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part14', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part15', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part16', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:37:04.076921 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.076928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2b8f5a57--fc4d--5c4a--8869--764dca19b379-osd--block--2b8f5a57--fc4d--5c4a--8869--764dca19b379'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HOxmXw-N5cX-V1Nz-Lu3r-OQk9-N5gG-1syyTi', 'scsi-0QEMU_QEMU_HARDDISK_5578c4aa-4507-4a80-9665-78072b9f11f4', 'scsi-SQEMU_QEMU_HARDDISK_5578c4aa-4507-4a80-9665-78072b9f11f4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:37:04.076944 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.076954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--af42a967--eb71--546a--abb0--a5185990ed2a-osd--block--af42a967--eb71--546a--abb0--a5185990ed2a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yf6lEa-f3nO-iewk-DEDy-Fb6j-Kq2P-dbkgMf', 'scsi-0QEMU_QEMU_HARDDISK_1f26c814-af40-4046-ac8d-013998d956cc', 'scsi-SQEMU_QEMU_HARDDISK_1f26c814-af40-4046-ac8d-013998d956cc'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:37:04.076966 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c15f901f-7629-41e5-bfd5-e721d3f198c6', 'scsi-SQEMU_QEMU_HARDDISK_c15f901f-7629-41e5-bfd5-e721d3f198c6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:37:04.076976 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.076994 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-02-14-21-00']}, 'model': 'QEMU 
DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:37:04.198076 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.198146 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.198152 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:37:04.198158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.198180 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part1', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part14', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part15', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part16', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:37:04.198196 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89-osd--block--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AITawh-CkpC-7L3c-Vqqe-GXUP-7eEh-WwcXRH', 'scsi-0QEMU_QEMU_HARDDISK_9dac4244-a4bc-44f9-ad81-53a595dd15e5', 'scsi-SQEMU_QEMU_HARDDISK_9dac4244-a4bc-44f9-ad81-53a595dd15e5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:37:04.198205 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--106e1245--4ea8--54a2--9b27--5c2b147fae19-osd--block--106e1245--4ea8--54a2--9b27--5c2b147fae19'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QbZaLy-yUYT-ccut-PcI7-2pGL-9PmJ-6NoPFr', 'scsi-0QEMU_QEMU_HARDDISK_2d3e981f-8554-4288-941a-275f46913f28', 'scsi-SQEMU_QEMU_HARDDISK_2d3e981f-8554-4288-941a-275f46913f28'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:37:04.198215 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_076229ff-17a9-47be-973d-14b64a36a012', 'scsi-SQEMU_QEMU_HARDDISK_076229ff-17a9-47be-973d-14b64a36a012'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:37:04.198220 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-02-14-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:37:04.198226 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:37:04.198230 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d54a22ee--8606--5662--853b--b39e232caa8f-osd--block--d54a22ee--8606--5662--853b--b39e232caa8f', 'dm-uuid-LVM-oyVS0lpzZeiZxxmfRvad67kbexmRBG5IWJAtRWtNBygZ9yUEjcaaQoSOl1TBvsQs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.198236 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--e4fc6918--1796--5a48--9994--5f31e91196e6-osd--block--e4fc6918--1796--5a48--9994--5f31e91196e6', 'dm-uuid-LVM-o4NjfQidgd0d8Dt2ERSF2CVjMcc1iNdF2FL70XUBfeOz8qjNKOcDK13w6fcJ9Hta'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.198240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.198248 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.538301 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.538396 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.538438 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.538450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.538460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.538471 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-02 03:37:04.538510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part1', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part14', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part15', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part16', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:37:04.538619 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d54a22ee--8606--5662--853b--b39e232caa8f-osd--block--d54a22ee--8606--5662--853b--b39e232caa8f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qjdzC2-uhmD-TpwQ-o3eu-AERk-xIpn-IuLEqz', 'scsi-0QEMU_QEMU_HARDDISK_bc39994b-92aa-40f2-807e-6457f6f8ea40', 'scsi-SQEMU_QEMU_HARDDISK_bc39994b-92aa-40f2-807e-6457f6f8ea40'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:37:04.538634 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e4fc6918--1796--5a48--9994--5f31e91196e6-osd--block--e4fc6918--1796--5a48--9994--5f31e91196e6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-etyEN7-O4pu-QliJ-NKxv-0HLx-jIcx-JGZ0d7', 'scsi-0QEMU_QEMU_HARDDISK_10248bd5-0286-487e-81b0-791c797cb21b', 'scsi-SQEMU_QEMU_HARDDISK_10248bd5-0286-487e-81b0-791c797cb21b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:37:04.538645 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e969e129-18ea-460f-85bc-8dfb49c82359', 'scsi-SQEMU_QEMU_HARDDISK_e969e129-18ea-460f-85bc-8dfb49c82359'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:37:04.538657 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-02-14-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-02 03:37:04.538669 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:37:04.538680 | orchestrator | 2026-02-02 03:37:04.538696 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-02-02 03:37:04.538720 | orchestrator | Monday 02 February 2026 03:37:04 +0000 (0:00:00.605) 0:00:18.748 ******* 2026-02-02 03:37:04.538754 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2b8f5a57--fc4d--5c4a--8869--764dca19b379-osd--block--2b8f5a57--fc4d--5c4a--8869--764dca19b379', 'dm-uuid-LVM-2Xx1rXy8ZvvzVeymXUM2Y23jmTeKUn30gyH8a84MHrJn7bcz7phSu8LEA3bm3DqO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.657099 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--af42a967--eb71--546a--abb0--a5185990ed2a-osd--block--af42a967--eb71--546a--abb0--a5185990ed2a', 'dm-uuid-LVM-nQNI9mGSypmWJN7Kribh0RNL5qLQKFSceYxT4mfzBYfoYiha3ZzoEdYR0rTnnIvK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.657175 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.657184 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.657189 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.657194 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.657200 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.657246 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.657252 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.657258 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.657265 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part1', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part14', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part15', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part16', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-02 03:37:04.657286 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89-osd--block--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89', 'dm-uuid-LVM-bGXwDmNnGJLl15xDO66UDgeGoDbpg8C0HvMSdsO6YcSLb4aDqGATNEcOudg8iQom'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.808837 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2b8f5a57--fc4d--5c4a--8869--764dca19b379-osd--block--2b8f5a57--fc4d--5c4a--8869--764dca19b379'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HOxmXw-N5cX-V1Nz-Lu3r-OQk9-N5gG-1syyTi', 'scsi-0QEMU_QEMU_HARDDISK_5578c4aa-4507-4a80-9665-78072b9f11f4', 'scsi-SQEMU_QEMU_HARDDISK_5578c4aa-4507-4a80-9665-78072b9f11f4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.808969 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--106e1245--4ea8--54a2--9b27--5c2b147fae19-osd--block--106e1245--4ea8--54a2--9b27--5c2b147fae19', 'dm-uuid-LVM-7fojGdQjjxzlZ1d67G3lfXV0uQvvNrpG74l8TP6AWG5LY1LTlUkEVjmQPc2hTMkL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.808999 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--af42a967--eb71--546a--abb0--a5185990ed2a-osd--block--af42a967--eb71--546a--abb0--a5185990ed2a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yf6lEa-f3nO-iewk-DEDy-Fb6j-Kq2P-dbkgMf', 'scsi-0QEMU_QEMU_HARDDISK_1f26c814-af40-4046-ac8d-013998d956cc', 'scsi-SQEMU_QEMU_HARDDISK_1f26c814-af40-4046-ac8d-013998d956cc'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.809029 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c15f901f-7629-41e5-bfd5-e721d3f198c6', 'scsi-SQEMU_QEMU_HARDDISK_c15f901f-7629-41e5-bfd5-e721d3f198c6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.809121 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.809143 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.809156 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-02-14-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.809168 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.809179 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.809191 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.809211 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:37:04.809231 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.809253 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.935984 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.936076 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part1', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part14', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part15', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part16', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-02 03:37:04.936128 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89-osd--block--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AITawh-CkpC-7L3c-Vqqe-GXUP-7eEh-WwcXRH', 'scsi-0QEMU_QEMU_HARDDISK_9dac4244-a4bc-44f9-ad81-53a595dd15e5', 'scsi-SQEMU_QEMU_HARDDISK_9dac4244-a4bc-44f9-ad81-53a595dd15e5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.936167 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--106e1245--4ea8--54a2--9b27--5c2b147fae19-osd--block--106e1245--4ea8--54a2--9b27--5c2b147fae19'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QbZaLy-yUYT-ccut-PcI7-2pGL-9PmJ-6NoPFr', 'scsi-0QEMU_QEMU_HARDDISK_2d3e981f-8554-4288-941a-275f46913f28', 'scsi-SQEMU_QEMU_HARDDISK_2d3e981f-8554-4288-941a-275f46913f28'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.936184 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d54a22ee--8606--5662--853b--b39e232caa8f-osd--block--d54a22ee--8606--5662--853b--b39e232caa8f', 'dm-uuid-LVM-oyVS0lpzZeiZxxmfRvad67kbexmRBG5IWJAtRWtNBygZ9yUEjcaaQoSOl1TBvsQs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.936200 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_076229ff-17a9-47be-973d-14b64a36a012', 'scsi-SQEMU_QEMU_HARDDISK_076229ff-17a9-47be-973d-14b64a36a012'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.936233 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-02-14-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.936254 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e4fc6918--1796--5a48--9994--5f31e91196e6-osd--block--e4fc6918--1796--5a48--9994--5f31e91196e6', 'dm-uuid-LVM-o4NjfQidgd0d8Dt2ERSF2CVjMcc1iNdF2FL70XUBfeOz8qjNKOcDK13w6fcJ9Hta'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:04.936279 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:05.074866 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:37:05.074948 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:05.074961 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:05.074968 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:05.074995 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:05.075013 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:05.075020 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:05.075039 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-02 03:37:05.075050 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part1', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part14', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part15', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part16', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-02 03:37:05.075068 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d54a22ee--8606--5662--853b--b39e232caa8f-osd--block--d54a22ee--8606--5662--853b--b39e232caa8f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qjdzC2-uhmD-TpwQ-o3eu-AERk-xIpn-IuLEqz', 'scsi-0QEMU_QEMU_HARDDISK_bc39994b-92aa-40f2-807e-6457f6f8ea40', 'scsi-SQEMU_QEMU_HARDDISK_bc39994b-92aa-40f2-807e-6457f6f8ea40'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-02 03:37:05.075081 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e4fc6918--1796--5a48--9994--5f31e91196e6-osd--block--e4fc6918--1796--5a48--9994--5f31e91196e6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-etyEN7-O4pu-QliJ-NKxv-0HLx-jIcx-JGZ0d7', 'scsi-0QEMU_QEMU_HARDDISK_10248bd5-0286-487e-81b0-791c797cb21b', 'scsi-SQEMU_QEMU_HARDDISK_10248bd5-0286-487e-81b0-791c797cb21b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-02 03:37:15.806613 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e969e129-18ea-460f-85bc-8dfb49c82359', 'scsi-SQEMU_QEMU_HARDDISK_e969e129-18ea-460f-85bc-8dfb49c82359'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-02 03:37:15.806727 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-02-02-14-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None,
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-02 03:37:15.806761 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:37:15.806769 | orchestrator |
2026-02-02 03:37:15.806776 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-02 03:37:15.806784 | orchestrator | Monday 02 February 2026 03:37:05 +0000 (0:00:00.679) 0:00:19.427 *******
2026-02-02 03:37:15.806790 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:37:15.806796 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:37:15.806802 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:37:15.806808 | orchestrator |
2026-02-02 03:37:15.806814 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-02 03:37:15.806819 | orchestrator | Monday 02 February 2026 03:37:06 +0000 (0:00:00.976) 0:00:20.404 *******
2026-02-02 03:37:15.806825 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:37:15.806830 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:37:15.806836 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:37:15.806841 | orchestrator |
2026-02-02 03:37:15.806847 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-02 03:37:15.806853 | orchestrator | Monday 02 February 2026 03:37:06 +0000 (0:00:00.334) 0:00:20.739 *******
2026-02-02 03:37:15.806859 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:37:15.806865 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:37:15.806871 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:37:15.806877 | orchestrator |
2026-02-02 03:37:15.806905 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-02 03:37:15.806912 | orchestrator | Monday 02 February 2026 03:37:07 +0000 (0:00:00.622) 0:00:21.362 *******
2026-02-02 03:37:15.806917 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:37:15.806923 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:37:15.806929 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:37:15.806934 | orchestrator |
2026-02-02 03:37:15.806941 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-02 03:37:15.806947 | orchestrator | Monday 02 February 2026 03:37:07 +0000 (0:00:00.335) 0:00:21.697 *******
2026-02-02 03:37:15.806953 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:37:15.806959 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:37:15.806964 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:37:15.806970 | orchestrator |
2026-02-02 03:37:15.806976 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-02 03:37:15.806982 | orchestrator | Monday 02 February 2026 03:37:08 +0000 (0:00:00.833) 0:00:22.530 *******
2026-02-02 03:37:15.806988 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:37:15.806994 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:37:15.807000 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:37:15.807006 | orchestrator |
2026-02-02 03:37:15.807012 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-02 03:37:15.807017 | orchestrator | Monday 02 February 2026 03:37:08 +0000 (0:00:00.337) 0:00:22.868 *******
2026-02-02 03:37:15.807024 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-02 03:37:15.807030 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-02 03:37:15.807036 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-02 03:37:15.807043 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-02 03:37:15.807049 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-02 03:37:15.807054 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-02 03:37:15.807060 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-02 03:37:15.807075 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-02 03:37:15.807081 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-02 03:37:15.807087 | orchestrator |
2026-02-02 03:37:15.807094 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-02 03:37:15.807101 | orchestrator | Monday 02 February 2026 03:37:09 +0000 (0:00:01.116) 0:00:23.985 *******
2026-02-02 03:37:15.807123 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-02 03:37:15.807129 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-02 03:37:15.807136 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-02 03:37:15.807142 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:37:15.807148 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-02 03:37:15.807153 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-02 03:37:15.807160 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-02 03:37:15.807166 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:37:15.807173 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-02 03:37:15.807179 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-02 03:37:15.807185 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-02 03:37:15.807191 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:37:15.807196 | orchestrator |
2026-02-02 03:37:15.807202 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-02 03:37:15.807208 | orchestrator | Monday 02 February 2026 03:37:10 +0000 (0:00:00.399) 0:00:24.384 *******
2026-02-02 03:37:15.807215 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 03:37:15.807222 | orchestrator |
2026-02-02 03:37:15.807229 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-02 03:37:15.807237 | orchestrator | Monday 02 February 2026 03:37:10 +0000 (0:00:00.785) 0:00:25.169 *******
2026-02-02 03:37:15.807243 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:37:15.807249 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:37:15.807254 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:37:15.807261 | orchestrator |
2026-02-02 03:37:15.807267 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-02 03:37:15.807273 | orchestrator | Monday 02 February 2026 03:37:11 +0000 (0:00:00.328) 0:00:25.497 *******
2026-02-02 03:37:15.807279 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:37:15.807285 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:37:15.807291 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:37:15.807297 | orchestrator |
2026-02-02 03:37:15.807304 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-02 03:37:15.807310 | orchestrator | Monday 02 February 2026 03:37:11 +0000 (0:00:00.335) 0:00:25.832 *******
2026-02-02 03:37:15.807317 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:37:15.807323 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:37:15.807329 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:37:15.807335 | orchestrator |
2026-02-02 03:37:15.807341 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-02 03:37:15.807348 | orchestrator | Monday 02 February 2026 03:37:12 +0000 (0:00:00.565) 0:00:26.397 *******
2026-02-02 03:37:15.807355 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:37:15.807361 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:37:15.807367 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:37:15.807373 | orchestrator |
2026-02-02 03:37:15.807379 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-02 03:37:15.807385 | orchestrator | Monday 02 February 2026 03:37:12 +0000 (0:00:00.456) 0:00:26.854 *******
2026-02-02 03:37:15.807391 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 03:37:15.807405 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 03:37:15.807418 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 03:37:15.807424 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:37:15.807430 | orchestrator |
2026-02-02 03:37:15.807436 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-02 03:37:15.807442 | orchestrator | Monday 02 February 2026 03:37:12 +0000 (0:00:00.387) 0:00:27.242 *******
2026-02-02 03:37:15.807447 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 03:37:15.807453 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 03:37:15.807458 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 03:37:15.807464 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:37:15.807469 | orchestrator |
2026-02-02 03:37:15.807475 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-02 03:37:15.807481 | orchestrator | Monday 02 February 2026 03:37:13 +0000 (0:00:00.402) 0:00:27.644 *******
2026-02-02 03:37:15.807487 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 03:37:15.807493 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 03:37:15.807498 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 03:37:15.807504 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:37:15.807510 | orchestrator |
2026-02-02 03:37:15.807516 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-02 03:37:15.807522 | orchestrator | Monday 02 February 2026 03:37:13 +0000 (0:00:00.390) 0:00:28.035 *******
2026-02-02 03:37:15.807528 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:37:15.807534 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:37:15.807539 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:37:15.807545 | orchestrator |
2026-02-02 03:37:15.807573 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-02 03:37:15.807580 | orchestrator | Monday 02 February 2026 03:37:14 +0000 (0:00:00.353) 0:00:28.389 *******
2026-02-02 03:37:15.807606 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-02 03:37:15.807612 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-02 03:37:15.807618 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-02 03:37:15.807624 | orchestrator |
2026-02-02 03:37:15.807630 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-02 03:37:15.807635 | orchestrator | Monday 02 February 2026 03:37:14 +0000 (0:00:00.838) 0:00:29.228 *******
2026-02-02 03:37:15.807642 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-02 03:37:15.807661 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 03:38:51.764083 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 03:38:51.764197 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 03:38:51.764214 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-02 03:38:51.764228 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-02 03:38:51.764239 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-02 03:38:51.764251 | orchestrator |
2026-02-02 03:38:51.764263 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-02 03:38:51.764275 | orchestrator | Monday 02 February 2026 03:37:15 +0000 (0:00:00.927) 0:00:30.155 *******
2026-02-02 03:38:51.764286 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-02 03:38:51.764297 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 03:38:51.764308 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 03:38:51.764319 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 03:38:51.764355 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-02 03:38:51.764367 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-02 03:38:51.764377 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-02 03:38:51.764388 | orchestrator |
2026-02-02 03:38:51.764399 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2026-02-02 03:38:51.764410 | orchestrator | Monday 02 February 2026 03:37:17 +0000 (0:00:01.841) 0:00:31.996 *******
2026-02-02 03:38:51.764421 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:38:51.764432 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:38:51.764443 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2026-02-02 03:38:51.764454 | orchestrator |
2026-02-02 03:38:51.764465 | orchestrator | TASK [create openstack pool(s)] ************************************************
2026-02-02 03:38:51.764476 | orchestrator | Monday 02 February 2026 03:37:18 +0000 (0:00:00.443) 0:00:32.439 *******
2026-02-02 03:38:51.764489 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-02 03:38:51.764503 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-02 03:38:51.764528 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-02 03:38:51.764539 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-02 03:38:51.764550 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-02 03:38:51.764561 | orchestrator |
2026-02-02 03:38:51.764575 | orchestrator | TASK [generate keys] ***********************************************************
2026-02-02 03:38:51.764588 | orchestrator | Monday 02 February 2026 03:38:01 +0000 (0:00:43.193) 0:01:15.633 *******
2026-02-02 03:38:51.764601 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 03:38:51.764613 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 03:38:51.764626 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 03:38:51.764639 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 03:38:51.764651 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 03:38:51.764664 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 03:38:51.764676 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2026-02-02 03:38:51.764689 | orchestrator |
2026-02-02 03:38:51.764702 | orchestrator | TASK [get keys from monitors] **************************************************
2026-02-02 03:38:51.764715 | orchestrator | Monday 02 February 2026 03:38:23 +0000 (0:00:22.364) 0:01:37.998 *******
2026-02-02 03:38:51.764745 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 03:38:51.764766 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 03:38:51.764778 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 03:38:51.764788 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 03:38:51.764800 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 03:38:51.764864 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 03:38:51.764885 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-02 03:38:51.764903 | orchestrator |
2026-02-02 03:38:51.764922 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2026-02-02 03:38:51.764940 | orchestrator | Monday 02 February 2026 03:38:34 +0000 (0:00:10.973) 0:01:48.971 *******
2026-02-02 03:38:51.764958 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 03:38:51.764976 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-02 03:38:51.764994 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-02 03:38:51.765012 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 03:38:51.765028 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-02 03:38:51.765044 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-02 03:38:51.765061 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 03:38:51.765078 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-02 03:38:51.765096 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-02 03:38:51.765114 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 03:38:51.765129 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-02 03:38:51.765146 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-02 03:38:51.765162 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 03:38:51.765179 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-02 03:38:51.765198 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-02 03:38:51.765215 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 03:38:51.765231 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-02 03:38:51.765248 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-02 03:38:51.765265 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-02-02 03:38:51.765283 | orchestrator |
2026-02-02 03:38:51.765301 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 03:38:51.765330 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-02-02 03:38:51.765352 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-02-02 03:38:51.765371 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-02-02 03:38:51.765383 | orchestrator |
2026-02-02 03:38:51.765394 | orchestrator |
2026-02-02 03:38:51.765405 | orchestrator |
2026-02-02 03:38:51.765416 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 03:38:51.765426 | orchestrator | Monday 02 February 2026 03:38:51 +0000 (0:00:17.119) 0:02:06.091 *******
2026-02-02 03:38:51.765437 | orchestrator | ===============================================================================
2026-02-02 03:38:51.765458 | orchestrator | create openstack pool(s) ----------------------------------------------- 43.19s
2026-02-02 03:38:51.765469 | orchestrator | generate keys ---------------------------------------------------------- 22.36s
2026-02-02 03:38:51.765480 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.12s
2026-02-02 03:38:51.765491 | orchestrator | get keys from monitors ------------------------------------------------- 10.97s
2026-02-02 03:38:51.765501 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.11s
2026-02-02 03:38:51.765512 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.84s
2026-02-02 03:38:51.765523 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.62s
2026-02-02 03:38:51.765533 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 1.13s
2026-02-02 03:38:51.765544 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.12s
2026-02-02 03:38:51.765555 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.98s
2026-02-02 03:38:51.765566 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.95s
2026-02-02 03:38:51.765576 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.93s
2026-02-02 03:38:51.765587 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 0.90s
2026-02-02 03:38:51.765610 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.84s
2026-02-02 03:38:52.167708 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.83s
2026-02-02 03:38:52.167831 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.79s
2026-02-02 03:38:52.167847 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.75s
2026-02-02 03:38:52.167857 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.75s
2026-02-02 03:38:52.167868 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.74s
2026-02-02 03:38:52.167878 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.68s
2026-02-02 03:38:54.796421 | orchestrator | 2026-02-02 03:38:54 | INFO  | Task 1cca8c39-ef9b-4404-86cb-80015f445734 (copy-ceph-keys) was prepared for execution.
2026-02-02 03:38:54.796524 | orchestrator | 2026-02-02 03:38:54 | INFO  | It takes a moment until task 1cca8c39-ef9b-4404-86cb-80015f445734 (copy-ceph-keys) has been started and output is visible here.
2026-02-02 03:39:34.477272 | orchestrator |
2026-02-02 03:39:34.477372 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-02-02 03:39:34.477386 | orchestrator |
2026-02-02 03:39:34.477396 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-02-02 03:39:34.477405 | orchestrator | Monday 02 February 2026 03:38:59 +0000 (0:00:00.176) 0:00:00.176 *******
2026-02-02 03:39:34.477414 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-02-02 03:39:34.477423 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-02 03:39:34.477432 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-02 03:39:34.477440 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-02-02 03:39:34.477448 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-02 03:39:34.477456 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-02-02 03:39:34.477464 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-02-02 03:39:34.477472 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-02-02 03:39:34.477501 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-02-02 03:39:34.477510 | orchestrator |
2026-02-02 03:39:34.477518 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-02-02 03:39:34.477538 | orchestrator | Monday 02 February 2026 03:39:04 +0000 (0:00:04.507) 0:00:04.683 *******
2026-02-02 03:39:34.477547 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-02-02 03:39:34.477567 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-02 03:39:34.477575 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-02 03:39:34.477583 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-02-02 03:39:34.477591 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-02 03:39:34.477599 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-02-02 03:39:34.477607 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-02-02 03:39:34.477615 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-02-02 03:39:34.477622 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-02-02 03:39:34.477630 | orchestrator |
2026-02-02 03:39:34.477638 | orchestrator | TASK [Create share directory] **************************************************
2026-02-02 03:39:34.477646 | orchestrator | Monday 02 February 2026 03:39:08 +0000 (0:00:04.162) 0:00:08.846 *******
2026-02-02 03:39:34.477655 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-02 03:39:34.477664 | orchestrator |
2026-02-02 03:39:34.477672 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-02-02 03:39:34.477680 | orchestrator | Monday 02 February 2026 03:39:09 +0000 (0:00:01.035) 0:00:09.881 *******
2026-02-02 03:39:34.477688 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-02-02 03:39:34.477696 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-02-02 03:39:34.477705 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-02-02 03:39:34.477713 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-02-02 03:39:34.477721 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-02-02 03:39:34.477729 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-02-02 03:39:34.477737 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-02-02 03:39:34.477745 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-02-02 03:39:34.477752 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-02-02 03:39:34.477760 | orchestrator |
2026-02-02 03:39:34.477768 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-02-02 03:39:34.477776 | orchestrator | Monday 02 February 2026 03:39:23 +0000 (0:00:14.686) 0:00:24.568 *******
2026-02-02 03:39:34.477784 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-02-02 03:39:34.477792 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-02-02 03:39:34.477800 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-02-02 03:39:34.477808 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-02-02 03:39:34.477830 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-02-02 03:39:34.477847 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-02-02 03:39:34.477856 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-02-02 03:39:34.477865 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-02-02 03:39:34.477874 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-02-02 03:39:34.477883 | orchestrator |
2026-02-02 03:39:34.477893 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-02-02 03:39:34.477902 | orchestrator | Monday 02 February 2026 03:39:27 +0000 (0:00:03.114) 0:00:27.682 *******
2026-02-02 03:39:34.477945 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-02-02 03:39:34.477962 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-02-02 03:39:34.477978 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-02-02 03:39:34.477992 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-02-02 03:39:34.478006 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-02-02 03:39:34.478062 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-02-02 03:39:34.478073 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-02-02 03:39:34.478082 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-02-02 03:39:34.478091 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-02-02 03:39:34.478098 | orchestrator |
2026-02-02 03:39:34.478107 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 03:39:34.478120 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 03:39:34.478129 | orchestrator |
2026-02-02 03:39:34.478137 | orchestrator |
2026-02-02 03:39:34.478145 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 03:39:34.478153 | orchestrator | Monday 02 February 2026 03:39:34 +0000 (0:00:07.013) 0:00:34.696 *******
2026-02-02 03:39:34.478161 | orchestrator | ===============================================================================
2026-02-02 03:39:34.478169 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.69s
2026-02-02 03:39:34.478177 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.01s
2026-02-02 03:39:34.478185 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.51s
2026-02-02 03:39:34.478192 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.16s
2026-02-02 03:39:34.478200 | orchestrator | Check if target directories exist --------------------------------------- 3.11s
2026-02-02 03:39:34.478208 | orchestrator | Create share directory -------------------------------------------------- 1.04s
2026-02-02 03:39:47.060620 | orchestrator | 2026-02-02 03:39:47 | INFO  | Task c5dc912e-8ecb-4788-afb6-20ac925bdab8 (cephclient) was prepared for execution.
2026-02-02 03:39:47.060737 | orchestrator | 2026-02-02 03:39:47 | INFO  | It takes a moment until task c5dc912e-8ecb-4788-afb6-20ac925bdab8 (cephclient) has been started and output is visible here. 2026-02-02 03:40:46.705242 | orchestrator | 2026-02-02 03:40:46.705362 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-02-02 03:40:46.705382 | orchestrator | 2026-02-02 03:40:46.705394 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-02-02 03:40:46.705405 | orchestrator | Monday 02 February 2026 03:39:51 +0000 (0:00:00.256) 0:00:00.256 ******* 2026-02-02 03:40:46.705416 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-02-02 03:40:46.705453 | orchestrator | 2026-02-02 03:40:46.705466 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-02-02 03:40:46.705476 | orchestrator | Monday 02 February 2026 03:39:51 +0000 (0:00:00.258) 0:00:00.515 ******* 2026-02-02 03:40:46.705489 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-02-02 03:40:46.705499 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-02-02 03:40:46.705510 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-02-02 03:40:46.705519 | orchestrator | 2026-02-02 03:40:46.705528 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-02-02 03:40:46.705537 | orchestrator | Monday 02 February 2026 03:39:53 +0000 (0:00:01.404) 0:00:01.919 ******* 2026-02-02 03:40:46.705547 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-02-02 03:40:46.705556 | orchestrator | 2026-02-02 03:40:46.705565 | orchestrator | TASK [osism.services.cephclient : Copy keyring 
file] *************************** 2026-02-02 03:40:46.705574 | orchestrator | Monday 02 February 2026 03:39:54 +0000 (0:00:01.521) 0:00:03.441 ******* 2026-02-02 03:40:46.705584 | orchestrator | changed: [testbed-manager] 2026-02-02 03:40:46.705593 | orchestrator | 2026-02-02 03:40:46.705603 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-02-02 03:40:46.705612 | orchestrator | Monday 02 February 2026 03:39:55 +0000 (0:00:00.974) 0:00:04.416 ******* 2026-02-02 03:40:46.705622 | orchestrator | changed: [testbed-manager] 2026-02-02 03:40:46.705631 | orchestrator | 2026-02-02 03:40:46.705641 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-02-02 03:40:46.705650 | orchestrator | Monday 02 February 2026 03:39:56 +0000 (0:00:00.934) 0:00:05.351 ******* 2026-02-02 03:40:46.705660 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-02-02 03:40:46.705669 | orchestrator | ok: [testbed-manager] 2026-02-02 03:40:46.705679 | orchestrator | 2026-02-02 03:40:46.705688 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-02-02 03:40:46.705698 | orchestrator | Monday 02 February 2026 03:40:36 +0000 (0:00:39.759) 0:00:45.110 ******* 2026-02-02 03:40:46.705708 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-02-02 03:40:46.705719 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-02-02 03:40:46.705730 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-02-02 03:40:46.705741 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-02-02 03:40:46.705751 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-02-02 03:40:46.705760 | orchestrator | 2026-02-02 03:40:46.705771 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-02-02 03:40:46.705782 | 
orchestrator | Monday 02 February 2026 03:40:40 +0000 (0:00:04.173) 0:00:49.283 ******* 2026-02-02 03:40:46.705793 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-02-02 03:40:46.705803 | orchestrator | 2026-02-02 03:40:46.705814 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-02-02 03:40:46.705824 | orchestrator | Monday 02 February 2026 03:40:41 +0000 (0:00:00.475) 0:00:49.759 ******* 2026-02-02 03:40:46.705835 | orchestrator | skipping: [testbed-manager] 2026-02-02 03:40:46.705846 | orchestrator | 2026-02-02 03:40:46.705856 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-02-02 03:40:46.705867 | orchestrator | Monday 02 February 2026 03:40:41 +0000 (0:00:00.150) 0:00:49.909 ******* 2026-02-02 03:40:46.705877 | orchestrator | skipping: [testbed-manager] 2026-02-02 03:40:46.705888 | orchestrator | 2026-02-02 03:40:46.705899 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-02-02 03:40:46.705909 | orchestrator | Monday 02 February 2026 03:40:41 +0000 (0:00:00.578) 0:00:50.487 ******* 2026-02-02 03:40:46.705937 | orchestrator | changed: [testbed-manager] 2026-02-02 03:40:46.705949 | orchestrator | 2026-02-02 03:40:46.705960 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-02-02 03:40:46.705987 | orchestrator | Monday 02 February 2026 03:40:43 +0000 (0:00:01.482) 0:00:51.970 ******* 2026-02-02 03:40:46.705999 | orchestrator | changed: [testbed-manager] 2026-02-02 03:40:46.706010 | orchestrator | 2026-02-02 03:40:46.706140 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-02-02 03:40:46.706151 | orchestrator | Monday 02 February 2026 03:40:44 +0000 (0:00:00.770) 0:00:52.741 ******* 2026-02-02 03:40:46.706161 | orchestrator | changed: [testbed-manager] 2026-02-02 03:40:46.706171 | 
orchestrator | 2026-02-02 03:40:46.706181 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-02-02 03:40:46.706191 | orchestrator | Monday 02 February 2026 03:40:44 +0000 (0:00:00.561) 0:00:53.302 ******* 2026-02-02 03:40:46.706201 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-02-02 03:40:46.706210 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-02-02 03:40:46.706220 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-02-02 03:40:46.706230 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-02-02 03:40:46.706241 | orchestrator | 2026-02-02 03:40:46.706251 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 03:40:46.706261 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 03:40:46.706274 | orchestrator | 2026-02-02 03:40:46.706284 | orchestrator | 2026-02-02 03:40:46.706315 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 03:40:46.706325 | orchestrator | Monday 02 February 2026 03:40:46 +0000 (0:00:01.531) 0:00:54.833 ******* 2026-02-02 03:40:46.706334 | orchestrator | =============================================================================== 2026-02-02 03:40:46.706343 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 39.76s 2026-02-02 03:40:46.706352 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.17s 2026-02-02 03:40:46.706361 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.53s 2026-02-02 03:40:46.706370 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.52s 2026-02-02 03:40:46.706380 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.48s 2026-02-02 03:40:46.706391 | 
orchestrator | osism.services.cephclient : Create required directories ----------------- 1.40s 2026-02-02 03:40:46.706400 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.97s 2026-02-02 03:40:46.706409 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.93s 2026-02-02 03:40:46.706418 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.77s 2026-02-02 03:40:46.706428 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.58s 2026-02-02 03:40:46.706437 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.56s 2026-02-02 03:40:46.706446 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.48s 2026-02-02 03:40:46.706455 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.26s 2026-02-02 03:40:46.706465 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s 2026-02-02 03:40:49.250600 | orchestrator | 2026-02-02 03:40:49 | INFO  | Task 3973d1dd-319f-4f39-a611-61ea6f9de130 (ceph-bootstrap-dashboard) was prepared for execution. 2026-02-02 03:40:49.250702 | orchestrator | 2026-02-02 03:40:49 | INFO  | It takes a moment until task 3973d1dd-319f-4f39-a611-61ea6f9de130 (ceph-bootstrap-dashboard) has been started and output is visible here. 
2026-02-02 03:42:06.492899 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-02 03:42:06.493018 | orchestrator | 2.16.14 2026-02-02 03:42:06.493044 | orchestrator | 2026-02-02 03:42:06.493057 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-02-02 03:42:06.493068 | orchestrator | 2026-02-02 03:42:06.493078 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-02-02 03:42:06.493191 | orchestrator | Monday 02 February 2026 03:40:54 +0000 (0:00:00.333) 0:00:00.333 ******* 2026-02-02 03:42:06.493205 | orchestrator | changed: [testbed-manager] 2026-02-02 03:42:06.493266 | orchestrator | 2026-02-02 03:42:06.493279 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-02-02 03:42:06.493289 | orchestrator | Monday 02 February 2026 03:40:56 +0000 (0:00:02.137) 0:00:02.470 ******* 2026-02-02 03:42:06.493298 | orchestrator | changed: [testbed-manager] 2026-02-02 03:42:06.493308 | orchestrator | 2026-02-02 03:42:06.493318 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-02-02 03:42:06.493328 | orchestrator | Monday 02 February 2026 03:40:57 +0000 (0:00:01.122) 0:00:03.593 ******* 2026-02-02 03:42:06.493337 | orchestrator | changed: [testbed-manager] 2026-02-02 03:42:06.493347 | orchestrator | 2026-02-02 03:42:06.493357 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-02-02 03:42:06.493366 | orchestrator | Monday 02 February 2026 03:40:58 +0000 (0:00:01.069) 0:00:04.663 ******* 2026-02-02 03:42:06.493376 | orchestrator | changed: [testbed-manager] 2026-02-02 03:42:06.493386 | orchestrator | 2026-02-02 03:42:06.493395 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-02-02 03:42:06.493405 | orchestrator | Monday 02 February 
2026 03:40:59 +0000 (0:00:01.276) 0:00:05.939 ******* 2026-02-02 03:42:06.493414 | orchestrator | changed: [testbed-manager] 2026-02-02 03:42:06.493424 | orchestrator | 2026-02-02 03:42:06.493433 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-02-02 03:42:06.493444 | orchestrator | Monday 02 February 2026 03:41:00 +0000 (0:00:01.158) 0:00:07.098 ******* 2026-02-02 03:42:06.493470 | orchestrator | changed: [testbed-manager] 2026-02-02 03:42:06.493482 | orchestrator | 2026-02-02 03:42:06.493494 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-02-02 03:42:06.493505 | orchestrator | Monday 02 February 2026 03:41:02 +0000 (0:00:01.079) 0:00:08.177 ******* 2026-02-02 03:42:06.493517 | orchestrator | changed: [testbed-manager] 2026-02-02 03:42:06.493528 | orchestrator | 2026-02-02 03:42:06.493539 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-02-02 03:42:06.493551 | orchestrator | Monday 02 February 2026 03:41:04 +0000 (0:00:02.103) 0:00:10.280 ******* 2026-02-02 03:42:06.493563 | orchestrator | changed: [testbed-manager] 2026-02-02 03:42:06.493572 | orchestrator | 2026-02-02 03:42:06.493582 | orchestrator | TASK [Create admin user] ******************************************************* 2026-02-02 03:42:06.493592 | orchestrator | Monday 02 February 2026 03:41:05 +0000 (0:00:01.273) 0:00:11.554 ******* 2026-02-02 03:42:06.493601 | orchestrator | changed: [testbed-manager] 2026-02-02 03:42:06.493611 | orchestrator | 2026-02-02 03:42:06.493620 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-02-02 03:42:06.493630 | orchestrator | Monday 02 February 2026 03:41:41 +0000 (0:00:35.994) 0:00:47.549 ******* 2026-02-02 03:42:06.493640 | orchestrator | skipping: [testbed-manager] 2026-02-02 03:42:06.493649 | orchestrator | 2026-02-02 03:42:06.493659 | orchestrator | 
PLAY [Restart ceph manager services] ******************************************* 2026-02-02 03:42:06.493668 | orchestrator | 2026-02-02 03:42:06.493678 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-02 03:42:06.493688 | orchestrator | Monday 02 February 2026 03:41:41 +0000 (0:00:00.184) 0:00:47.734 ******* 2026-02-02 03:42:06.493697 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:42:06.493707 | orchestrator | 2026-02-02 03:42:06.493716 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-02 03:42:06.493726 | orchestrator | 2026-02-02 03:42:06.493735 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-02 03:42:06.493745 | orchestrator | Monday 02 February 2026 03:41:53 +0000 (0:00:11.781) 0:00:59.516 ******* 2026-02-02 03:42:06.493754 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:42:06.493764 | orchestrator | 2026-02-02 03:42:06.493774 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-02 03:42:06.493793 | orchestrator | 2026-02-02 03:42:06.493803 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-02 03:42:06.493812 | orchestrator | Monday 02 February 2026 03:42:04 +0000 (0:00:11.264) 0:01:10.780 ******* 2026-02-02 03:42:06.493823 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:42:06.493833 | orchestrator | 2026-02-02 03:42:06.493843 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 03:42:06.493853 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 03:42:06.493865 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 03:42:06.493875 | orchestrator | testbed-node-1 : ok=1  changed=1  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 03:42:06.493885 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 03:42:06.493894 | orchestrator | 2026-02-02 03:42:06.493904 | orchestrator | 2026-02-02 03:42:06.493913 | orchestrator | 2026-02-02 03:42:06.493923 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 03:42:06.493933 | orchestrator | Monday 02 February 2026 03:42:06 +0000 (0:00:01.389) 0:01:12.170 ******* 2026-02-02 03:42:06.493942 | orchestrator | =============================================================================== 2026-02-02 03:42:06.493952 | orchestrator | Create admin user ------------------------------------------------------ 35.99s 2026-02-02 03:42:06.493980 | orchestrator | Restart ceph manager service ------------------------------------------- 24.44s 2026-02-02 03:42:06.493991 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.14s 2026-02-02 03:42:06.494001 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.10s 2026-02-02 03:42:06.494010 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.28s 2026-02-02 03:42:06.494076 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.27s 2026-02-02 03:42:06.494087 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.16s 2026-02-02 03:42:06.494097 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.12s 2026-02-02 03:42:06.494106 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.08s 2026-02-02 03:42:06.494116 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.07s 2026-02-02 03:42:06.494126 | orchestrator | Remove temporary file for 
ceph_dashboard_password ----------------------- 0.18s 2026-02-02 03:42:06.866002 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh 2026-02-02 03:42:09.032648 | orchestrator | 2026-02-02 03:42:09 | INFO  | Task 79bd8940-ca53-46f8-8190-133b2ee64f1b (keystone) was prepared for execution. 2026-02-02 03:42:09.032813 | orchestrator | 2026-02-02 03:42:09 | INFO  | It takes a moment until task 79bd8940-ca53-46f8-8190-133b2ee64f1b (keystone) has been started and output is visible here. 2026-02-02 03:42:16.521757 | orchestrator | 2026-02-02 03:42:16.521868 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 03:42:16.521883 | orchestrator | 2026-02-02 03:42:16.521893 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 03:42:16.521916 | orchestrator | Monday 02 February 2026 03:42:13 +0000 (0:00:00.274) 0:00:00.274 ******* 2026-02-02 03:42:16.521925 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:42:16.521934 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:42:16.521942 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:42:16.521950 | orchestrator | 2026-02-02 03:42:16.521958 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 03:42:16.521966 | orchestrator | Monday 02 February 2026 03:42:13 +0000 (0:00:00.346) 0:00:00.621 ******* 2026-02-02 03:42:16.521994 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-02-02 03:42:16.522003 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-02-02 03:42:16.522011 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-02-02 03:42:16.522049 | orchestrator | 2026-02-02 03:42:16.522063 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-02-02 03:42:16.522077 | orchestrator | 2026-02-02 03:42:16.522090 | orchestrator | TASK 
[keystone : include_tasks] ************************************************ 2026-02-02 03:42:16.522105 | orchestrator | Monday 02 February 2026 03:42:14 +0000 (0:00:00.481) 0:00:01.102 ******* 2026-02-02 03:42:16.522118 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:42:16.522131 | orchestrator | 2026-02-02 03:42:16.522143 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-02-02 03:42:16.522154 | orchestrator | Monday 02 February 2026 03:42:14 +0000 (0:00:00.605) 0:00:01.707 ******* 2026-02-02 03:42:16.522171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-02 03:42:16.522187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-02 03:42:16.522233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-02 03:42:16.522290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-02 03:42:16.522306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-02 03:42:16.522320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-02 03:42:16.522334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-02 03:42:16.522348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-02 03:42:16.522363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-02 03:42:16.522387 | orchestrator | 2026-02-02 03:42:16.522403 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 
2026-02-02 03:42:16.522427 | orchestrator | Monday 02 February 2026 03:42:16 +0000 (0:00:01.557) 0:00:03.265 ******* 2026-02-02 03:42:22.278074 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:42:22.278165 | orchestrator | 2026-02-02 03:42:22.278176 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-02-02 03:42:22.278198 | orchestrator | Monday 02 February 2026 03:42:16 +0000 (0:00:00.331) 0:00:03.597 ******* 2026-02-02 03:42:22.278206 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:42:22.278214 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:42:22.278221 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:42:22.278229 | orchestrator | 2026-02-02 03:42:22.278236 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-02-02 03:42:22.278298 | orchestrator | Monday 02 February 2026 03:42:17 +0000 (0:00:00.322) 0:00:03.919 ******* 2026-02-02 03:42:22.278313 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-02 03:42:22.278327 | orchestrator | 2026-02-02 03:42:22.278335 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-02 03:42:22.278342 | orchestrator | Monday 02 February 2026 03:42:17 +0000 (0:00:00.832) 0:00:04.752 ******* 2026-02-02 03:42:22.278351 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:42:22.278358 | orchestrator | 2026-02-02 03:42:22.278365 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-02-02 03:42:22.278373 | orchestrator | Monday 02 February 2026 03:42:18 +0000 (0:00:00.573) 0:00:05.326 ******* 2026-02-02 03:42:22.278384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-02 03:42:22.278395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-02 03:42:22.278405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-02 03:42:22.278453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 03:42:22.278464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 03:42:22.278472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 03:42:22.278480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 03:42:22.278487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 03:42:22.278502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 03:42:22.278510 | orchestrator |
2026-02-02 03:42:22.278517 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2026-02-02 03:42:22.278525 | orchestrator | Monday 02 February 2026 03:42:21 +0000 (0:00:03.048) 0:00:08.374 *******
2026-02-02 03:42:22.278539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-02 03:42:23.202175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 03:42:23.202422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 03:42:23.202460 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:42:23.202487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-02 03:42:23.202544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 03:42:23.202572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 03:42:23.202591 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:42:23.202637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-02 03:42:23.202660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 03:42:23.202680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 03:42:23.202718 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:42:23.202736 | orchestrator |
2026-02-02 03:42:23.202757 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2026-02-02 03:42:23.202779 | orchestrator | Monday 02 February 2026 03:42:22 +0000 (0:00:00.657) 0:00:09.032 *******
2026-02-02 03:42:23.202802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-02 03:42:23.202832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 03:42:23.202869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 03:42:26.397912 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:42:26.398065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-02 03:42:26.398083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 03:42:26.398115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 03:42:26.398122 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:42:26.398139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-02 03:42:26.398144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 03:42:26.398163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 03:42:26.398169 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:42:26.398175 | orchestrator |
2026-02-02 03:42:26.398184 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2026-02-02 03:42:26.398194 | orchestrator | Monday 02 February 2026 03:42:23 +0000 (0:00:00.899) 0:00:09.932 *******
2026-02-02 03:42:26.398202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-02 03:42:26.398215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-02 03:42:26.398236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-02 03:42:26.398250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 03:42:31.233132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 03:42:31.233302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 03:42:31.233326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 03:42:31.233346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 03:42:31.233375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 03:42:31.233387 | orchestrator |
2026-02-02 03:42:31.233401 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2026-02-02 03:42:31.233415 | orchestrator | Monday 02 February 2026 03:42:26 +0000 (0:00:03.212) 0:00:13.145 *******
2026-02-02 03:42:31.233480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-02 03:42:31.233494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 03:42:31.233519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-02 03:42:31.233533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 03:42:31.233553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-02 03:42:31.233575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 03:42:34.917843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 03:42:34.917934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 03:42:34.917940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 03:42:34.917946 | orchestrator |
2026-02-02 03:42:34.917952 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2026-02-02 03:42:34.917958 | orchestrator | Monday 02 February 2026 03:42:31 +0000 (0:00:04.835) 0:00:17.980 *******
2026-02-02 03:42:34.917962 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:42:34.917967 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:42:34.917972 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:42:34.917976 | orchestrator |
2026-02-02 03:42:34.917980 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2026-02-02 03:42:34.917984 | orchestrator | Monday 02 February 2026 03:42:32 +0000 (0:00:01.447) 0:00:19.427 *******
2026-02-02 03:42:34.917988 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:42:34.917992 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:42:34.917996 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:42:34.918000 | orchestrator |
2026-02-02 03:42:34.918004 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2026-02-02 03:42:34.918008 | orchestrator | Monday 02 February 2026 03:42:33 +0000 (0:00:00.610) 0:00:20.038 *******
2026-02-02 03:42:34.918012 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:42:34.918050 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:42:34.918054 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:42:34.918058 | orchestrator |
2026-02-02 03:42:34.918072 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2026-02-02 03:42:34.918076 | orchestrator | Monday 02 February 2026 03:42:33 +0000 (0:00:00.571) 0:00:20.609 *******
2026-02-02 03:42:34.918080 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:42:34.918084 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:42:34.918088 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:42:34.918091 | orchestrator |
2026-02-02 03:42:34.918096 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2026-02-02 03:42:34.918099 | orchestrator | Monday 02 February 2026 03:42:34 +0000 (0:00:00.354) 0:00:20.964 *******
2026-02-02 03:42:34.918115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-02 03:42:34.918124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 03:42:34.918130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 03:42:34.918134 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:42:34.918138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-02 03:42:34.918145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 03:42:34.918149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 03:42:34.918167 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:42:34.918175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-02 03:42:54.016205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 03:42:54.016341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 03:42:54.016353 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:42:54.016362 | orchestrator |
2026-02-02 03:42:54.016369 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-02 03:42:54.016376 | orchestrator | Monday 02 February 2026 03:42:34 +0000 (0:00:00.703) 0:00:21.667 *******
2026-02-02 03:42:54.016383 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:42:54.016392 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:42:54.016401 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:42:54.016410 | orchestrator |
2026-02-02 03:42:54.016418 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2026-02-02 03:42:54.016428 | orchestrator | Monday 02 February 2026 03:42:35 +0000 (0:00:00.321) 0:00:21.989 *******
2026-02-02 03:42:54.016437 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-02-02 03:42:54.016448 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-02-02 03:42:54.016478 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-02-02 03:42:54.016487 | orchestrator |
2026-02-02 03:42:54.016511 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-02-02 03:42:54.016520 | orchestrator | Monday 02 February 2026 03:42:37 +0000 (0:00:01.873) 0:00:23.862 *******
2026-02-02 03:42:54.016528 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-02 03:42:54.016537 | orchestrator |
2026-02-02 03:42:54.016545 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-02-02 03:42:54.016554 | orchestrator | Monday 02 February 2026 03:42:38 +0000 (0:00:01.075) 0:00:24.938 *******
2026-02-02 03:42:54.016562 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:42:54.016571 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:42:54.016579 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:42:54.016587 | orchestrator |
2026-02-02 03:42:54.016596 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-02-02 03:42:54.016606 | orchestrator | Monday 02 February 2026 03:42:38 +0000 (0:00:00.622) 0:00:25.561 *******
2026-02-02 03:42:54.016614 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-02 03:42:54.016625 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-02 03:42:54.016639 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-02 03:42:54.016647 | orchestrator |
2026-02-02 03:42:54.016656 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-02-02 03:42:54.016666 | orchestrator | Monday 02 February 2026 03:42:39 +0000 (0:00:01.080) 0:00:26.641 *******
2026-02-02 03:42:54.016675 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:42:54.016684 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:42:54.016692 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:42:54.016701 | orchestrator |
2026-02-02 03:42:54.016708 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-02-02 03:42:54.016717 | orchestrator | Monday 02 February 2026 03:42:40 +0000 (0:00:00.555) 0:00:27.196 *******
2026-02-02 03:42:54.016726 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-02-02 03:42:54.016735 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-02-02 03:42:54.016745 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-02-02 03:42:54.016754 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-02-02 03:42:54.016763 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-02-02 03:42:54.016773 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-02-02 03:42:54.016783 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-02-02 03:42:54.016793 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-02-02 03:42:54.016822 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-02-02 03:42:54.016832 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-02-02 03:42:54.016842 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-02-02 03:42:54.016853 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-02-02 03:42:54.016859 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-02-02 03:42:54.016866 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-02-02 03:42:54.016872 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-02-02 03:42:54.016879 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-02 03:42:54.016894 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-02 03:42:54.016901 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-02 03:42:54.016907 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-02 03:42:54.016914 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-02 03:42:54.016921 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-02 03:42:54.016927 | orchestrator |
2026-02-02 03:42:54.016934 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-02-02 03:42:54.016941 | orchestrator | Monday 02 February 2026 03:42:49 +0000 (0:00:08.794) 0:00:35.990 *******
2026-02-02 03:42:54.016948 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-02 03:42:54.016954 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-02 03:42:54.016961 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-02 03:42:54.016968 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-02 03:42:54.016974 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-02 03:42:54.016980 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-02 03:42:54.016987 | orchestrator |
2026-02-02 03:42:54.016993 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2026-02-02 03:42:54.017006 | orchestrator | Monday 02 February 2026 03:42:51 +0000 (0:00:02.540) 0:00:38.531 *******
2026-02-02 03:42:54.017016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-02 03:42:54.017031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-02 03:44:30.163058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-02 03:44:30.163197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 03:44:30.163232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 03:44:30.163245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-02 03:44:30.163256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 03:44:30.163283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 03:44:30.163303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-02 03:44:30.163311 | orchestrator |
2026-02-02 03:44:30.163319 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-02 03:44:30.163327 | orchestrator | Monday 02 February 2026 03:42:54 +0000 (0:00:02.233) 0:00:40.765 *******
2026-02-02 03:44:30.163334 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:44:30.163341 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:44:30.163347 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:44:30.163353 | orchestrator |
2026-02-02 03:44:30.163360 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-02-02 03:44:30.163366 | orchestrator | Monday 02 February 2026 03:42:54 +0000 (0:00:00.570) 0:00:41.335 *******
2026-02-02 03:44:30.163372 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:44:30.163379 | orchestrator |
2026-02-02 03:44:30.163385 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2026-02-02 03:44:30.163391 | orchestrator | Monday 02 February 2026 03:42:56 +0000 (0:00:02.213) 0:00:43.548 *******
2026-02-02 03:44:30.163397 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:44:30.163403 | orchestrator |
2026-02-02 03:44:30.163410 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-02-02 03:44:30.163416 | orchestrator | Monday 02 February 2026 03:42:58 +0000 (0:00:02.150) 0:00:45.699 *******
2026-02-02 03:44:30.163422 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:44:30.163428 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:44:30.163434 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:44:30.163440 | orchestrator |
2026-02-02 03:44:30.163472 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2026-02-02 03:44:30.163483 | orchestrator | Monday 02 February 2026 03:42:59 +0000 (0:00:00.823) 0:00:46.522 *******
2026-02-02 03:44:30.163494 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:44:30.163503 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:44:30.163513 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:44:30.163532 | orchestrator |
2026-02-02 03:44:30.163539 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-02-02 03:44:30.163551 | orchestrator | Monday 02 February 2026 03:43:00 +0000 (0:00:00.349) 0:00:46.872 *******
2026-02-02 03:44:30.163557 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:44:30.163564 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:44:30.163570 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:44:30.163583 | orchestrator |
2026-02-02 03:44:30.163589 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-02-02 03:44:30.163595 | orchestrator | Monday 02 February 2026 03:43:00 +0000 (0:00:00.399) 0:00:47.271 *******
2026-02-02 03:44:30.163602 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:44:30.163609 | orchestrator |
2026-02-02 03:44:30.163616 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-02-02 03:44:30.163623 | orchestrator | Monday 02 February 2026 03:43:14 +0000 (0:00:14.304) 0:01:01.575 *******
2026-02-02 03:44:30.163630 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:44:30.163638 | orchestrator |
2026-02-02 03:44:30.163645 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-02 03:44:30.163652 | orchestrator | Monday 02 February 2026 03:43:24 +0000 (0:00:09.645) 0:01:11.221 *******
2026-02-02 03:44:30.163672 | orchestrator |
2026-02-02 03:44:30.163679 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-02 03:44:30.163686 | orchestrator | Monday 02 February 2026 03:43:24 +0000 (0:00:00.093) 0:01:11.315 *******
2026-02-02 03:44:30.163694 | orchestrator |
2026-02-02 03:44:30.163701 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-02 03:44:30.163709 | orchestrator | Monday 02 February 2026 03:43:24 +0000 (0:00:00.071) 0:01:11.386 *******
2026-02-02 03:44:30.163716 | orchestrator |
2026-02-02 03:44:30.163723 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-02-02 03:44:30.163730 | orchestrator | Monday 02 February 2026 03:43:24 +0000 (0:00:00.073) 0:01:11.460 *******
2026-02-02 03:44:30.163736 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:44:30.163742 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:44:30.163748 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:44:30.163755 | orchestrator |
2026-02-02 03:44:30.163761 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-02-02 03:44:30.163767 | orchestrator | Monday 02 February 2026 03:44:12 +0000 (0:00:47.654) 0:01:59.114 *******
2026-02-02 03:44:30.163773 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:44:30.163780 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:44:30.163786 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:44:30.163792 | orchestrator |
2026-02-02 03:44:30.163798 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-02-02 03:44:30.163804 | orchestrator | Monday 02 February 2026 03:44:22 +0000 (0:00:09.889) 0:02:09.003 *******
2026-02-02 03:44:30.163811 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:44:30.163817 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:44:30.163823 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:44:30.163829 | orchestrator |
2026-02-02 03:44:30.163835 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-02 03:44:30.163842 | orchestrator | Monday 02 February 2026 03:44:29 +0000 (0:00:07.252) 0:02:16.256 *******
2026-02-02 03:44:30.163854 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 03:45:17.946625 | orchestrator |
2026-02-02 03:45:17.946737 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-02-02 03:45:17.946751 | orchestrator | Monday 02 February 2026 03:44:30 +0000 (0:00:00.659) 0:02:16.915 *******
2026-02-02 03:45:17.946759 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:45:17.946769 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:45:17.946776 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:45:17.946783 | orchestrator |
2026-02-02 03:45:17.946790 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-02-02 03:45:17.946797 | orchestrator | Monday 02 February 2026 03:44:30 +0000 (0:00:00.744) 0:02:17.660 *******
2026-02-02 03:45:17.946804 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:45:17.946812 | orchestrator |
2026-02-02 03:45:17.946820 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-02-02 03:45:17.946827 | orchestrator | Monday 02 February 2026 03:44:33 +0000 (0:00:02.216) 0:02:19.876 *******
2026-02-02 03:45:17.946834 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-02-02 03:45:17.946841 | orchestrator |
2026-02-02 03:45:17.946848 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-02-02 03:45:17.946855 | orchestrator | Monday 02 February 2026 03:44:43 +0000 (0:00:10.554) 0:02:30.431 *******
2026-02-02 03:45:17.946862 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-02-02 03:45:17.946869 | orchestrator |
2026-02-02 03:45:17.946875 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-02-02 03:45:17.946882 | orchestrator | Monday 02 February 2026 03:45:06 +0000 (0:00:23.077) 0:02:53.509 *******
2026-02-02 03:45:17.946889 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-02-02 03:45:17.946917 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-02-02 03:45:17.946924 | orchestrator |
2026-02-02 03:45:17.946931 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-02-02 03:45:17.946937 | orchestrator | Monday 02 February 2026 03:45:12 +0000 (0:00:05.842) 0:02:59.351 *******
2026-02-02 03:45:17.946947 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:45:17.946957 | orchestrator |
2026-02-02 03:45:17.946967 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-02-02 03:45:17.946974 | orchestrator | Monday 02 February 2026 03:45:12 +0000 (0:00:00.144) 0:02:59.496 *******
2026-02-02 03:45:17.946982 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:45:17.946993 | orchestrator |
2026-02-02 03:45:17.947004 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-02-02 03:45:17.947015 | orchestrator | Monday 02 February 2026 03:45:12 +0000 (0:00:00.125) 0:02:59.621 *******
2026-02-02 03:45:17.947025 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:45:17.947035 | orchestrator |
2026-02-02 03:45:17.947062 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-02-02 03:45:17.947073 | orchestrator | Monday 02 February 2026 03:45:13 +0000 (0:00:00.141) 0:02:59.762 *******
2026-02-02 03:45:17.947084 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:45:17.947094 | orchestrator |
2026-02-02 03:45:17.947103 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-02-02 03:45:17.947115 | orchestrator | Monday 02 February 2026 03:45:13 +0000 (0:00:00.413) 0:03:00.176 *******
2026-02-02 03:45:17.947126 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:45:17.947136 | orchestrator |
2026-02-02 03:45:17.947147 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-02 03:45:17.947159 | orchestrator | Monday 02 February 2026 03:45:16 +0000 (0:00:03.391) 0:03:03.567 *******
2026-02-02 03:45:17.947171 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:45:17.947184 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:45:17.947195 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:45:17.947206 | orchestrator |
2026-02-02 03:45:17.947218 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 03:45:17.947231 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-02 03:45:17.947244 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-02 03:45:17.947255 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-02 03:45:17.947267 | orchestrator |
2026-02-02 03:45:17.947279 | orchestrator |
2026-02-02 03:45:17.947292 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 03:45:17.947303 | orchestrator | Monday 02 February 2026 03:45:17 +0000 (0:00:00.712) 0:03:04.280 *******
2026-02-02 03:45:17.947315 | orchestrator | ===============================================================================
2026-02-02 03:45:17.947326 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 47.65s
2026-02-02 03:45:17.947339 | orchestrator | service-ks-register : keystone | Creating services --------------------- 23.08s
2026-02-02 03:45:17.947350 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.30s
2026-02-02 03:45:17.947361 | orchestrator | keystone : Creating admin project, user, role, service, and
endpoint --- 10.56s 2026-02-02 03:45:17.947373 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.89s 2026-02-02 03:45:17.947384 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.65s 2026-02-02 03:45:17.947396 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.79s 2026-02-02 03:45:17.947407 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.25s 2026-02-02 03:45:17.947430 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 5.84s 2026-02-02 03:45:17.947461 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.84s 2026-02-02 03:45:17.947474 | orchestrator | keystone : Creating default user role ----------------------------------- 3.39s 2026-02-02 03:45:17.947486 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.21s 2026-02-02 03:45:17.947497 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.05s 2026-02-02 03:45:17.947530 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.54s 2026-02-02 03:45:17.947542 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.23s 2026-02-02 03:45:17.947553 | orchestrator | keystone : Run key distribution ----------------------------------------- 2.22s 2026-02-02 03:45:17.947564 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.21s 2026-02-02 03:45:17.947575 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.15s 2026-02-02 03:45:17.947662 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.87s 2026-02-02 03:45:17.947675 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 
1.56s 2026-02-02 03:45:20.607462 | orchestrator | 2026-02-02 03:45:20 | INFO  | Task 252959c2-9c54-47fa-abb4-bcebbf5bab3a (placement) was prepared for execution. 2026-02-02 03:45:20.607622 | orchestrator | 2026-02-02 03:45:20 | INFO  | It takes a moment until task 252959c2-9c54-47fa-abb4-bcebbf5bab3a (placement) has been started and output is visible here. 2026-02-02 03:45:55.704646 | orchestrator | 2026-02-02 03:45:55.704765 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 03:45:55.704783 | orchestrator | 2026-02-02 03:45:55.704796 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 03:45:55.704809 | orchestrator | Monday 02 February 2026 03:45:25 +0000 (0:00:00.295) 0:00:00.295 ******* 2026-02-02 03:45:55.704821 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:45:55.704834 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:45:55.704847 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:45:55.704859 | orchestrator | 2026-02-02 03:45:55.704870 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 03:45:55.704883 | orchestrator | Monday 02 February 2026 03:45:25 +0000 (0:00:00.329) 0:00:00.625 ******* 2026-02-02 03:45:55.704895 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-02-02 03:45:55.704907 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-02-02 03:45:55.704919 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-02-02 03:45:55.704930 | orchestrator | 2026-02-02 03:45:55.704957 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-02-02 03:45:55.704970 | orchestrator | 2026-02-02 03:45:55.704982 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-02 03:45:55.704993 | orchestrator | Monday 02 February 2026 
03:45:25 +0000 (0:00:00.483) 0:00:01.108 ******* 2026-02-02 03:45:55.705005 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:45:55.705017 | orchestrator | 2026-02-02 03:45:55.705029 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-02-02 03:45:55.705041 | orchestrator | Monday 02 February 2026 03:45:26 +0000 (0:00:00.594) 0:00:01.702 ******* 2026-02-02 03:45:55.705052 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-02-02 03:45:55.705064 | orchestrator | 2026-02-02 03:45:55.705075 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-02-02 03:45:55.705086 | orchestrator | Monday 02 February 2026 03:45:30 +0000 (0:00:03.764) 0:00:05.467 ******* 2026-02-02 03:45:55.705098 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-02-02 03:45:55.705134 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-02-02 03:45:55.705146 | orchestrator | 2026-02-02 03:45:55.705158 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-02-02 03:45:55.705169 | orchestrator | Monday 02 February 2026 03:45:36 +0000 (0:00:06.427) 0:00:11.894 ******* 2026-02-02 03:45:55.705183 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-02-02 03:45:55.705195 | orchestrator | 2026-02-02 03:45:55.705208 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-02-02 03:45:55.705222 | orchestrator | Monday 02 February 2026 03:45:40 +0000 (0:00:03.644) 0:00:15.539 ******* 2026-02-02 03:45:55.705234 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-02 03:45:55.705246 | orchestrator | changed: [testbed-node-0] => 
(item=placement -> service) 2026-02-02 03:45:55.705258 | orchestrator | 2026-02-02 03:45:55.705271 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-02-02 03:45:55.705285 | orchestrator | Monday 02 February 2026 03:45:44 +0000 (0:00:04.270) 0:00:19.810 ******* 2026-02-02 03:45:55.705298 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-02 03:45:55.705310 | orchestrator | 2026-02-02 03:45:55.705323 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-02-02 03:45:55.705335 | orchestrator | Monday 02 February 2026 03:45:47 +0000 (0:00:03.015) 0:00:22.825 ******* 2026-02-02 03:45:55.705348 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-02-02 03:45:55.705361 | orchestrator | 2026-02-02 03:45:55.705374 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-02 03:45:55.705386 | orchestrator | Monday 02 February 2026 03:45:51 +0000 (0:00:03.683) 0:00:26.509 ******* 2026-02-02 03:45:55.705399 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:45:55.705412 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:45:55.705424 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:45:55.705437 | orchestrator | 2026-02-02 03:45:55.705450 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-02-02 03:45:55.705463 | orchestrator | Monday 02 February 2026 03:45:51 +0000 (0:00:00.325) 0:00:26.834 ******* 2026-02-02 03:45:55.705478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-02 03:45:55.705517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-02 03:45:55.705539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-02 03:45:55.705551 | orchestrator | 2026-02-02 03:45:55.705585 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-02-02 03:45:55.705597 | orchestrator | Monday 02 February 2026 03:45:52 +0000 (0:00:00.816) 0:00:27.651 ******* 2026-02-02 03:45:55.705609 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:45:55.705621 | orchestrator | 2026-02-02 03:45:55.705632 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-02-02 03:45:55.705641 | orchestrator | Monday 02 February 2026 03:45:52 +0000 (0:00:00.365) 0:00:28.017 ******* 2026-02-02 03:45:55.705651 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:45:55.705663 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:45:55.705674 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:45:55.705686 | orchestrator | 2026-02-02 03:45:55.705697 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-02 03:45:55.705709 | orchestrator | Monday 02 February 2026 03:45:53 +0000 (0:00:00.317) 0:00:28.334 ******* 2026-02-02 03:45:55.705721 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:45:55.705733 | orchestrator | 2026-02-02 03:45:55.705744 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-02-02 03:45:55.705756 | orchestrator | Monday 02 February 2026 03:45:53 +0000 (0:00:00.650) 
0:00:28.985 ******* 2026-02-02 03:45:55.705768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-02 03:45:55.705790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-02 03:45:58.691953 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-02 03:45:58.692033 | orchestrator | 2026-02-02 03:45:58.692044 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-02-02 03:45:58.692051 | orchestrator | Monday 02 February 2026 03:45:55 +0000 (0:00:01.886) 0:00:30.871 ******* 2026-02-02 03:45:58.692060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-02 03:45:58.692067 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:45:58.692074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-02 03:45:58.692081 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:45:58.692088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-02 03:45:58.692111 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:45:58.692117 | orchestrator | 2026-02-02 03:45:58.692124 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-02-02 03:45:58.692143 | orchestrator | Monday 02 February 2026 03:45:56 +0000 (0:00:00.547) 0:00:31.418 ******* 2026-02-02 03:45:58.692155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-02 03:45:58.692163 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:45:58.692169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-02 03:45:58.692176 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:45:58.692182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-02 03:45:58.692189 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:45:58.692195 | orchestrator | 2026-02-02 03:45:58.692201 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-02-02 03:45:58.692211 | orchestrator | Monday 02 February 2026 03:45:57 +0000 (0:00:00.779) 0:00:32.198 ******* 2026-02-02 03:45:58.692222 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-02 03:45:58.692252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-02 03:46:05.792046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-02 03:46:05.792176 | orchestrator | 2026-02-02 03:46:05.792199 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-02-02 03:46:05.792213 | orchestrator | Monday 02 February 2026 03:45:58 +0000 (0:00:01.668) 0:00:33.866 ******* 2026-02-02 03:46:05.792225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}}) 2026-02-02 03:46:05.792239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-02 03:46:05.792294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-02 03:46:05.792310 | orchestrator | 2026-02-02 03:46:05.792321 | orchestrator | 
TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-02-02 03:46:05.792331 | orchestrator | Monday 02 February 2026 03:46:01 +0000 (0:00:02.399) 0:00:36.266 ******* 2026-02-02 03:46:05.792361 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-02 03:46:05.792374 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-02 03:46:05.792385 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-02 03:46:05.792396 | orchestrator | 2026-02-02 03:46:05.792407 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-02-02 03:46:05.792418 | orchestrator | Monday 02 February 2026 03:46:02 +0000 (0:00:01.458) 0:00:37.725 ******* 2026-02-02 03:46:05.792428 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:46:05.792440 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:46:05.792450 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:46:05.792461 | orchestrator | 2026-02-02 03:46:05.792473 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-02-02 03:46:05.792483 | orchestrator | Monday 02 February 2026 03:46:03 +0000 (0:00:01.295) 0:00:39.020 ******* 2026-02-02 03:46:05.792494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-02 03:46:05.792506 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:46:05.792517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-02 03:46:05.792538 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:46:05.792551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-02 03:46:05.792564 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:46:05.792607 | orchestrator | 2026-02-02 03:46:05.792619 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-02-02 03:46:05.792638 | orchestrator | Monday 02 February 2026 03:46:04 +0000 (0:00:00.863) 0:00:39.883 ******* 2026-02-02 03:46:05.792663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-02 03:46:34.057581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-02 03:46:34.057768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-02 03:46:34.057796 | orchestrator | 2026-02-02 03:46:34.057817 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-02-02 03:46:34.057837 | orchestrator | Monday 02 February 2026 03:46:05 +0000 (0:00:01.086) 0:00:40.969 ******* 2026-02-02 03:46:34.057855 | orchestrator | changed: [testbed-node-0] 2026-02-02 
03:46:34.057875 | orchestrator | 2026-02-02 03:46:34.057895 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-02-02 03:46:34.057915 | orchestrator | Monday 02 February 2026 03:46:07 +0000 (0:00:02.038) 0:00:43.007 ******* 2026-02-02 03:46:34.057934 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:46:34.057948 | orchestrator | 2026-02-02 03:46:34.057959 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-02-02 03:46:34.057970 | orchestrator | Monday 02 February 2026 03:46:09 +0000 (0:00:02.108) 0:00:45.116 ******* 2026-02-02 03:46:34.057981 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:46:34.057992 | orchestrator | 2026-02-02 03:46:34.058003 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-02 03:46:34.058014 | orchestrator | Monday 02 February 2026 03:46:22 +0000 (0:00:12.954) 0:00:58.071 ******* 2026-02-02 03:46:34.058087 | orchestrator | 2026-02-02 03:46:34.058102 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-02 03:46:34.058115 | orchestrator | Monday 02 February 2026 03:46:22 +0000 (0:00:00.070) 0:00:58.141 ******* 2026-02-02 03:46:34.058129 | orchestrator | 2026-02-02 03:46:34.058142 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-02 03:46:34.058155 | orchestrator | Monday 02 February 2026 03:46:23 +0000 (0:00:00.066) 0:00:58.207 ******* 2026-02-02 03:46:34.058169 | orchestrator | 2026-02-02 03:46:34.058182 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-02-02 03:46:34.058196 | orchestrator | Monday 02 February 2026 03:46:23 +0000 (0:00:00.086) 0:00:58.294 ******* 2026-02-02 03:46:34.058209 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:46:34.058237 | orchestrator | changed: [testbed-node-2] 2026-02-02 
03:46:34.058250 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:46:34.058263 | orchestrator | 2026-02-02 03:46:34.058277 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 03:46:34.058291 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-02 03:46:34.058305 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-02 03:46:34.058319 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-02 03:46:34.058332 | orchestrator | 2026-02-02 03:46:34.058346 | orchestrator | 2026-02-02 03:46:34.058359 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 03:46:34.058372 | orchestrator | Monday 02 February 2026 03:46:33 +0000 (0:00:10.562) 0:01:08.857 ******* 2026-02-02 03:46:34.058396 | orchestrator | =============================================================================== 2026-02-02 03:46:34.058409 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.95s 2026-02-02 03:46:34.058441 | orchestrator | placement : Restart placement-api container ---------------------------- 10.56s 2026-02-02 03:46:34.058455 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.43s 2026-02-02 03:46:34.058469 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.27s 2026-02-02 03:46:34.058480 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.76s 2026-02-02 03:46:34.058490 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.68s 2026-02-02 03:46:34.058501 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.64s 2026-02-02 03:46:34.058512 | orchestrator | 
service-ks-register : placement | Creating roles ------------------------ 3.02s 2026-02-02 03:46:34.058523 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.40s 2026-02-02 03:46:34.058534 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.11s 2026-02-02 03:46:34.058545 | orchestrator | placement : Creating placement databases -------------------------------- 2.04s 2026-02-02 03:46:34.058555 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.89s 2026-02-02 03:46:34.058566 | orchestrator | placement : Copying over config.json files for services ----------------- 1.67s 2026-02-02 03:46:34.058576 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.46s 2026-02-02 03:46:34.058587 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.30s 2026-02-02 03:46:34.058598 | orchestrator | placement : Check placement containers ---------------------------------- 1.09s 2026-02-02 03:46:34.058678 | orchestrator | placement : Copying over existing policy file --------------------------- 0.86s 2026-02-02 03:46:34.058698 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.82s 2026-02-02 03:46:34.058726 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.78s 2026-02-02 03:46:34.058745 | orchestrator | placement : include_tasks ----------------------------------------------- 0.65s 2026-02-02 03:46:36.594099 | orchestrator | 2026-02-02 03:46:36 | INFO  | Task d4f9ddb5-188f-4685-9f06-7004dc69a92b (neutron) was prepared for execution. 2026-02-02 03:46:36.594225 | orchestrator | 2026-02-02 03:46:36 | INFO  | It takes a moment until task d4f9ddb5-188f-4685-9f06-7004dc69a92b (neutron) has been started and output is visible here. 
2026-02-02 03:47:24.656742 | orchestrator | 2026-02-02 03:47:24.656823 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 03:47:24.656832 | orchestrator | 2026-02-02 03:47:24.656837 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 03:47:24.656844 | orchestrator | Monday 02 February 2026 03:46:41 +0000 (0:00:00.305) 0:00:00.305 ******* 2026-02-02 03:47:24.656849 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:47:24.656856 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:47:24.656861 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:47:24.656867 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:47:24.656872 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:47:24.656877 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:47:24.656882 | orchestrator | 2026-02-02 03:47:24.656887 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 03:47:24.656893 | orchestrator | Monday 02 February 2026 03:46:41 +0000 (0:00:00.740) 0:00:01.046 ******* 2026-02-02 03:47:24.656898 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-02-02 03:47:24.656903 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-02-02 03:47:24.656909 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-02-02 03:47:24.656914 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-02-02 03:47:24.656919 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-02-02 03:47:24.656938 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-02-02 03:47:24.656944 | orchestrator | 2026-02-02 03:47:24.656949 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-02-02 03:47:24.656954 | orchestrator | 2026-02-02 03:47:24.656959 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-02-02 03:47:24.656964 | orchestrator | Monday 02 February 2026 03:46:42 +0000 (0:00:00.642) 0:00:01.688 ******* 2026-02-02 03:47:24.656977 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 03:47:24.656983 | orchestrator | 2026-02-02 03:47:24.656988 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-02-02 03:47:24.656993 | orchestrator | Monday 02 February 2026 03:46:43 +0000 (0:00:01.376) 0:00:03.065 ******* 2026-02-02 03:47:24.656999 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:47:24.657004 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:47:24.657009 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:47:24.657014 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:47:24.657019 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:47:24.657025 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:47:24.657030 | orchestrator | 2026-02-02 03:47:24.657035 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-02-02 03:47:24.657040 | orchestrator | Monday 02 February 2026 03:46:45 +0000 (0:00:01.381) 0:00:04.446 ******* 2026-02-02 03:47:24.657045 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:47:24.657050 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:47:24.657055 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:47:24.657060 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:47:24.657065 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:47:24.657070 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:47:24.657075 | orchestrator | 2026-02-02 03:47:24.657080 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-02-02 03:47:24.657085 | orchestrator | Monday 02 February 2026 03:46:46 +0000 (0:00:01.037) 0:00:05.483 ******* 
2026-02-02 03:47:24.657091 | orchestrator | ok: [testbed-node-0] => { 2026-02-02 03:47:24.657097 | orchestrator |  "changed": false, 2026-02-02 03:47:24.657102 | orchestrator |  "msg": "All assertions passed" 2026-02-02 03:47:24.657107 | orchestrator | } 2026-02-02 03:47:24.657113 | orchestrator | ok: [testbed-node-1] => { 2026-02-02 03:47:24.657118 | orchestrator |  "changed": false, 2026-02-02 03:47:24.657123 | orchestrator |  "msg": "All assertions passed" 2026-02-02 03:47:24.657128 | orchestrator | } 2026-02-02 03:47:24.657133 | orchestrator | ok: [testbed-node-2] => { 2026-02-02 03:47:24.657138 | orchestrator |  "changed": false, 2026-02-02 03:47:24.657143 | orchestrator |  "msg": "All assertions passed" 2026-02-02 03:47:24.657148 | orchestrator | } 2026-02-02 03:47:24.657153 | orchestrator | ok: [testbed-node-3] => { 2026-02-02 03:47:24.657158 | orchestrator |  "changed": false, 2026-02-02 03:47:24.657163 | orchestrator |  "msg": "All assertions passed" 2026-02-02 03:47:24.657168 | orchestrator | } 2026-02-02 03:47:24.657173 | orchestrator | ok: [testbed-node-4] => { 2026-02-02 03:47:24.657178 | orchestrator |  "changed": false, 2026-02-02 03:47:24.657184 | orchestrator |  "msg": "All assertions passed" 2026-02-02 03:47:24.657189 | orchestrator | } 2026-02-02 03:47:24.657194 | orchestrator | ok: [testbed-node-5] => { 2026-02-02 03:47:24.657199 | orchestrator |  "changed": false, 2026-02-02 03:47:24.657204 | orchestrator |  "msg": "All assertions passed" 2026-02-02 03:47:24.657209 | orchestrator | } 2026-02-02 03:47:24.657214 | orchestrator | 2026-02-02 03:47:24.657220 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-02-02 03:47:24.657225 | orchestrator | Monday 02 February 2026 03:46:47 +0000 (0:00:00.912) 0:00:06.396 ******* 2026-02-02 03:47:24.657230 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:47:24.657235 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:47:24.657240 | orchestrator 
| skipping: [testbed-node-2] 2026-02-02 03:47:24.657249 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:47:24.657254 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:47:24.657259 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:47:24.657264 | orchestrator | 2026-02-02 03:47:24.657270 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-02-02 03:47:24.657275 | orchestrator | Monday 02 February 2026 03:46:48 +0000 (0:00:00.715) 0:00:07.111 ******* 2026-02-02 03:47:24.657280 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-02-02 03:47:24.657285 | orchestrator | 2026-02-02 03:47:24.657290 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-02-02 03:47:24.657296 | orchestrator | Monday 02 February 2026 03:46:51 +0000 (0:00:03.364) 0:00:10.476 ******* 2026-02-02 03:47:24.657302 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-02-02 03:47:24.657309 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-02-02 03:47:24.657314 | orchestrator | 2026-02-02 03:47:24.657331 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-02-02 03:47:24.657337 | orchestrator | Monday 02 February 2026 03:46:57 +0000 (0:00:05.893) 0:00:16.370 ******* 2026-02-02 03:47:24.657343 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-02 03:47:24.657349 | orchestrator | 2026-02-02 03:47:24.657355 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-02-02 03:47:24.657361 | orchestrator | Monday 02 February 2026 03:47:00 +0000 (0:00:03.032) 0:00:19.403 ******* 2026-02-02 03:47:24.657367 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-02 03:47:24.657374 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> service) 2026-02-02 03:47:24.657380 | orchestrator | 2026-02-02 03:47:24.657385 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-02-02 03:47:24.657391 | orchestrator | Monday 02 February 2026 03:47:04 +0000 (0:00:03.976) 0:00:23.380 ******* 2026-02-02 03:47:24.657397 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-02 03:47:24.657404 | orchestrator | 2026-02-02 03:47:24.657410 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-02-02 03:47:24.657416 | orchestrator | Monday 02 February 2026 03:47:07 +0000 (0:00:03.234) 0:00:26.614 ******* 2026-02-02 03:47:24.657422 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-02-02 03:47:24.657427 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-02-02 03:47:24.657432 | orchestrator | 2026-02-02 03:47:24.657438 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-02 03:47:24.657443 | orchestrator | Monday 02 February 2026 03:47:14 +0000 (0:00:07.380) 0:00:33.995 ******* 2026-02-02 03:47:24.657448 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:47:24.657453 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:47:24.657458 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:47:24.657463 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:47:24.657468 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:47:24.657476 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:47:24.657482 | orchestrator | 2026-02-02 03:47:24.657487 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-02-02 03:47:24.657492 | orchestrator | Monday 02 February 2026 03:47:15 +0000 (0:00:00.832) 0:00:34.827 ******* 2026-02-02 03:47:24.657497 | orchestrator | skipping: [testbed-node-1] 2026-02-02 
03:47:24.657502 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:47:24.657507 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:47:24.657512 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:47:24.657517 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:47:24.657522 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:47:24.657527 | orchestrator | 2026-02-02 03:47:24.657532 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-02-02 03:47:24.657538 | orchestrator | Monday 02 February 2026 03:47:18 +0000 (0:00:02.311) 0:00:37.139 ******* 2026-02-02 03:47:24.657549 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:47:24.657554 | orchestrator | ok: [testbed-node-1] 2026-02-02 03:47:24.657559 | orchestrator | ok: [testbed-node-2] 2026-02-02 03:47:24.657564 | orchestrator | ok: [testbed-node-3] 2026-02-02 03:47:24.657569 | orchestrator | ok: [testbed-node-4] 2026-02-02 03:47:24.657574 | orchestrator | ok: [testbed-node-5] 2026-02-02 03:47:24.657579 | orchestrator | 2026-02-02 03:47:24.657584 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-02 03:47:24.657589 | orchestrator | Monday 02 February 2026 03:47:19 +0000 (0:00:01.234) 0:00:38.373 ******* 2026-02-02 03:47:24.657594 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:47:24.657600 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:47:24.657605 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:47:24.657610 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:47:24.657615 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:47:24.657620 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:47:24.657625 | orchestrator | 2026-02-02 03:47:24.657630 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-02-02 03:47:24.657635 | orchestrator | Monday 02 February 2026 03:47:21 +0000 (0:00:02.375) 
0:00:40.749 ******* 2026-02-02 03:47:24.657642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-02 03:47:24.657655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-02 03:47:30.708830 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-02 03:47:30.709032 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-02 03:47:30.709067 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-02 03:47:30.709088 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-02 03:47:30.709107 | orchestrator | 2026-02-02 03:47:30.709130 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-02-02 03:47:30.709150 | orchestrator | Monday 02 February 2026 03:47:24 +0000 (0:00:02.996) 0:00:43.745 ******* 2026-02-02 03:47:30.709168 | orchestrator | [WARNING]: Skipped 2026-02-02 03:47:30.709188 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-02-02 03:47:30.709206 | orchestrator | due to this access issue: 2026-02-02 03:47:30.709226 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-02-02 03:47:30.709245 | orchestrator | a directory 2026-02-02 03:47:30.709262 
| orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-02 03:47:30.709281 | orchestrator | 2026-02-02 03:47:30.709300 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-02 03:47:30.709319 | orchestrator | Monday 02 February 2026 03:47:25 +0000 (0:00:00.909) 0:00:44.655 ******* 2026-02-02 03:47:30.709339 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 03:47:30.709360 | orchestrator | 2026-02-02 03:47:30.709379 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-02-02 03:47:30.709423 | orchestrator | Monday 02 February 2026 03:47:26 +0000 (0:00:01.388) 0:00:46.043 ******* 2026-02-02 03:47:30.709456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-02 03:47:30.709493 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-02 03:47:30.709515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-02 03:47:30.709535 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-02 03:47:30.709568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-02 03:47:35.675789 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-02 03:47:35.675930 | orchestrator | 2026-02-02 03:47:35.675953 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-02-02 03:47:35.675968 | orchestrator | Monday 02 February 2026 03:47:30 +0000 (0:00:03.752) 0:00:49.795 ******* 2026-02-02 03:47:35.675982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-02 03:47:35.675995 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:47:35.676008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-02 03:47:35.676020 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:47:35.676032 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:47:35.676043 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:47:35.676075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-02 03:47:35.676115 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:47:35.676136 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:47:35.676148 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:47:35.676159 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:47:35.676170 
| orchestrator | skipping: [testbed-node-5] 2026-02-02 03:47:35.676185 | orchestrator | 2026-02-02 03:47:35.676204 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-02-02 03:47:35.676223 | orchestrator | Monday 02 February 2026 03:47:32 +0000 (0:00:02.114) 0:00:51.909 ******* 2026-02-02 03:47:35.676243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-02 03:47:35.676262 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:47:35.676296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': 
'30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-02 03:47:41.508069 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:47:41.508202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-02 03:47:41.508225 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:47:41.508240 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:47:41.508264 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:47:41.508285 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:47:41.508315 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:47:41.508336 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:47:41.508384 | orchestrator | skipping: [testbed-node-5] 2026-02-02 
03:47:41.508404 | orchestrator | 2026-02-02 03:47:41.508422 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-02-02 03:47:41.508443 | orchestrator | Monday 02 February 2026 03:47:35 +0000 (0:00:02.851) 0:00:54.760 ******* 2026-02-02 03:47:41.508462 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:47:41.508480 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:47:41.508497 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:47:41.508515 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:47:41.508533 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:47:41.508551 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:47:41.508570 | orchestrator | 2026-02-02 03:47:41.508588 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-02-02 03:47:41.508608 | orchestrator | Monday 02 February 2026 03:47:38 +0000 (0:00:02.527) 0:00:57.288 ******* 2026-02-02 03:47:41.508620 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:47:41.508633 | orchestrator | 2026-02-02 03:47:41.508646 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-02-02 03:47:41.508710 | orchestrator | Monday 02 February 2026 03:47:38 +0000 (0:00:00.141) 0:00:57.430 ******* 2026-02-02 03:47:41.508725 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:47:41.508738 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:47:41.508751 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:47:41.508764 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:47:41.508777 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:47:41.508789 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:47:41.508802 | orchestrator | 2026-02-02 03:47:41.508814 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-02-02 03:47:41.508827 | orchestrator | Monday 02 
February 2026 03:47:38 +0000 (0:00:00.655) 0:00:58.085 ******* 2026-02-02 03:47:41.508850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-02 03:47:41.508864 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:47:41.508878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2026-02-02 03:47:41.508902 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:47:41.508916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-02 03:47:41.508930 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:47:41.508943 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:47:41.508956 | orchestrator | skipping: [testbed-node-3] 2026-02-02 
03:47:41.508984 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:47:50.612238 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:47:50.612331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:47:50.612344 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:47:50.612352 | orchestrator | 2026-02-02 03:47:50.612361 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-02-02 03:47:50.612370 | orchestrator | Monday 02 February 2026 03:47:41 +0000 (0:00:02.503) 0:01:00.588 
******* 2026-02-02 03:47:50.612379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-02 03:47:50.612408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-02 03:47:50.612437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-02 03:47:50.612481 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-02 03:47:50.612492 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-02 03:47:50.612505 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-02 03:47:50.612513 | orchestrator | 2026-02-02 03:47:50.612521 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-02-02 03:47:50.612528 | orchestrator | Monday 02 February 2026 03:47:44 +0000 (0:00:03.030) 0:01:03.619 ******* 2026-02-02 03:47:50.612536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-02 03:47:50.612544 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-02 03:47:50.612588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-02 03:47:55.850386 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-02 03:47:55.850538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-02 
03:47:55.850564 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-02 03:47:55.850580 | orchestrator | 2026-02-02 03:47:55.850597 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-02-02 03:47:55.850616 | orchestrator | Monday 02 February 2026 03:47:50 +0000 (0:00:06.079) 0:01:09.699 ******* 2026-02-02 03:47:55.850632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2026-02-02 03:47:55.850667 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:47:55.850804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-02 03:47:55.850832 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:47:55.850844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}})  2026-02-02 03:47:55.850854 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:47:55.850865 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:47:55.850875 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:47:55.850885 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:47:55.850895 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:47:55.850912 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:47:55.850924 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:47:55.850937 | orchestrator | 2026-02-02 03:47:55.850949 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-02-02 03:47:55.850966 | orchestrator | Monday 02 February 2026 03:47:52 +0000 (0:00:02.092) 0:01:11.792 ******* 2026-02-02 03:47:55.850977 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:47:55.850988 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:47:55.850999 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:47:55.851010 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:47:55.851020 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:47:55.851038 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:48:15.605199 | orchestrator | 2026-02-02 03:48:15.605280 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-02-02 03:48:15.605287 | orchestrator | Monday 02 February 2026 03:47:55 +0000 (0:00:03.140) 0:01:14.932 ******* 2026-02-02 03:48:15.605294 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:48:15.605301 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:48:15.605307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:48:15.605311 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:48:15.605315 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:48:15.605319 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:48:15.605324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-02 03:48:15.605366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-02 03:48:15.605371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-02 03:48:15.605375 | orchestrator | 2026-02-02 03:48:15.605379 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-02-02 03:48:15.605383 | orchestrator | Monday 02 February 2026 03:47:59 +0000 (0:00:03.541) 0:01:18.474 ******* 2026-02-02 03:48:15.605387 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:48:15.605391 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:48:15.605395 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:48:15.605398 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:48:15.605402 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:48:15.605406 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:48:15.605410 | orchestrator | 2026-02-02 03:48:15.605414 | orchestrator | TASK [neutron : Copying over 
openvswitch_agent.ini] **************************** 2026-02-02 03:48:15.605417 | orchestrator | Monday 02 February 2026 03:48:01 +0000 (0:00:02.476) 0:01:20.951 ******* 2026-02-02 03:48:15.605421 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:48:15.605425 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:48:15.605429 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:48:15.605433 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:48:15.605436 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:48:15.605440 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:48:15.605444 | orchestrator | 2026-02-02 03:48:15.605448 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-02-02 03:48:15.605451 | orchestrator | Monday 02 February 2026 03:48:04 +0000 (0:00:02.232) 0:01:23.183 ******* 2026-02-02 03:48:15.605455 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:48:15.605459 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:48:15.605463 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:48:15.605467 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:48:15.605471 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:48:15.605474 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:48:15.605478 | orchestrator | 2026-02-02 03:48:15.605482 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-02-02 03:48:15.605489 | orchestrator | Monday 02 February 2026 03:48:06 +0000 (0:00:02.259) 0:01:25.443 ******* 2026-02-02 03:48:15.605493 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:48:15.605497 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:48:15.605501 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:48:15.605504 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:48:15.605508 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:48:15.605512 | orchestrator | 
skipping: [testbed-node-5] 2026-02-02 03:48:15.605516 | orchestrator | 2026-02-02 03:48:15.605519 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-02-02 03:48:15.605523 | orchestrator | Monday 02 February 2026 03:48:08 +0000 (0:00:02.357) 0:01:27.801 ******* 2026-02-02 03:48:15.605527 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:48:15.605530 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:48:15.605534 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:48:15.605538 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:48:15.605542 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:48:15.605545 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:48:15.605549 | orchestrator | 2026-02-02 03:48:15.605553 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-02-02 03:48:15.605556 | orchestrator | Monday 02 February 2026 03:48:10 +0000 (0:00:02.282) 0:01:30.084 ******* 2026-02-02 03:48:15.605560 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:48:15.605564 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:48:15.605568 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:48:15.605571 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:48:15.605578 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:48:15.605582 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:48:15.605586 | orchestrator | 2026-02-02 03:48:15.605589 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-02-02 03:48:15.605593 | orchestrator | Monday 02 February 2026 03:48:13 +0000 (0:00:02.217) 0:01:32.301 ******* 2026-02-02 03:48:15.605597 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-02 03:48:15.605602 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:48:15.605606 | orchestrator | skipping: 
[testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-02 03:48:15.605609 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:48:15.605613 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-02 03:48:15.605620 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:48:20.074870 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-02 03:48:20.074959 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:48:20.074970 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-02 03:48:20.074977 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:48:20.074984 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-02 03:48:20.074991 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:48:20.074997 | orchestrator | 2026-02-02 03:48:20.075004 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-02-02 03:48:20.075011 | orchestrator | Monday 02 February 2026 03:48:15 +0000 (0:00:02.384) 0:01:34.686 ******* 2026-02-02 03:48:20.075020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-02 03:48:20.075052 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:48:20.075059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-02 03:48:20.075066 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:48:20.075073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-02 03:48:20.075080 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:48:20.075111 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:48:20.075119 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:48:20.075126 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:48:20.075138 | orchestrator | 
skipping: [testbed-node-4] 2026-02-02 03:48:20.075144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:48:20.075151 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:48:20.075157 | orchestrator | 2026-02-02 03:48:20.075163 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-02-02 03:48:20.075170 | orchestrator | Monday 02 February 2026 03:48:17 +0000 (0:00:02.298) 0:01:36.984 ******* 2026-02-02 03:48:20.075176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-02 03:48:20.075183 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:48:20.075193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-02 03:48:20.075200 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:48:20.075212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-02 03:48:47.002651 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:48:47.002733 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:48:47.002797 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:48:47.002805 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:48:47.002810 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 03:48:47.002815 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:48:47.002819 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:48:47.002823 | orchestrator | 2026-02-02 03:48:47.002828 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-02-02 03:48:47.002834 | orchestrator | Monday 02 February 2026 03:48:20 +0000 (0:00:02.178) 0:01:39.163 ******* 2026-02-02 03:48:47.002838 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:48:47.002842 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:48:47.002845 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:48:47.002849 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:48:47.002853 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:48:47.002857 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:48:47.002861 | orchestrator | 2026-02-02 03:48:47.002876 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-02-02 03:48:47.002880 | orchestrator | Monday 02 February 2026 03:48:22 +0000 (0:00:02.163) 0:01:41.326 ******* 2026-02-02 03:48:47.002884 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:48:47.002888 | orchestrator | skipping: [testbed-node-0] 2026-02-02 
03:48:47.002892 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:48:47.002896 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:48:47.002900 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:48:47.002903 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:48:47.002907 | orchestrator |
2026-02-02 03:48:47.002911 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-02-02 03:48:47.002930 | orchestrator | Monday 02 February 2026 03:48:26 +0000 (0:00:04.048) 0:01:45.375 *******
2026-02-02 03:48:47.002934 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:48:47.002938 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:48:47.002942 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:48:47.002946 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:48:47.002949 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:48:47.002953 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:48:47.002957 | orchestrator |
2026-02-02 03:48:47.002961 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-02-02 03:48:47.002964 | orchestrator | Monday 02 February 2026 03:48:28 +0000 (0:00:02.347) 0:01:47.722 *******
2026-02-02 03:48:47.002968 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:48:47.002972 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:48:47.002976 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:48:47.002980 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:48:47.002983 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:48:47.002987 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:48:47.002991 | orchestrator |
2026-02-02 03:48:47.002995 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-02-02 03:48:47.003010 | orchestrator | Monday 02 February 2026 03:48:30 +0000 (0:00:02.272) 0:01:49.995 *******
2026-02-02 03:48:47.003014 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:48:47.003018 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:48:47.003021 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:48:47.003025 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:48:47.003029 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:48:47.003033 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:48:47.003036 | orchestrator |
2026-02-02 03:48:47.003040 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-02-02 03:48:47.003044 | orchestrator | Monday 02 February 2026 03:48:33 +0000 (0:00:02.285) 0:01:52.281 *******
2026-02-02 03:48:47.003048 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:48:47.003051 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:48:47.003055 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:48:47.003059 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:48:47.003062 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:48:47.003066 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:48:47.003070 | orchestrator |
2026-02-02 03:48:47.003074 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-02-02 03:48:47.003077 | orchestrator | Monday 02 February 2026 03:48:35 +0000 (0:00:02.329) 0:01:54.611 *******
2026-02-02 03:48:47.003081 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:48:47.003085 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:48:47.003089 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:48:47.003093 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:48:47.003097 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:48:47.003100 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:48:47.003104 | orchestrator |
2026-02-02 03:48:47.003108 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-02-02 03:48:47.003112 | orchestrator | Monday 02 February 2026 03:48:37 +0000 (0:00:02.326) 0:01:56.937 *******
2026-02-02 03:48:47.003116 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:48:47.003119 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:48:47.003123 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:48:47.003127 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:48:47.003130 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:48:47.003134 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:48:47.003138 | orchestrator |
2026-02-02 03:48:47.003142 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-02-02 03:48:47.003145 | orchestrator | Monday 02 February 2026 03:48:39 +0000 (0:00:02.142) 0:01:59.080 *******
2026-02-02 03:48:47.003149 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:48:47.003157 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:48:47.003163 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:48:47.003169 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:48:47.003175 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:48:47.003182 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:48:47.003188 | orchestrator |
2026-02-02 03:48:47.003194 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-02-02 03:48:47.003200 | orchestrator | Monday 02 February 2026 03:48:42 +0000 (0:00:02.334) 0:02:01.414 *******
2026-02-02 03:48:47.003206 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-02 03:48:47.003214 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:48:47.003220 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-02 03:48:47.003227 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:48:47.003234 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-02 03:48:47.003241 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:48:47.003248 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-02 03:48:47.003254 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:48:47.003262 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-02 03:48:47.003267 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:48:47.003274 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-02 03:48:47.003287 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:48:47.003298 | orchestrator |
2026-02-02 03:48:47.003304 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-02-02 03:48:47.003310 | orchestrator | Monday 02 February 2026 03:48:44 +0000 (0:00:02.443) 0:02:03.858 *******
2026-02-02 03:48:47.003317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-02 03:48:47.003324 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:48:47.003337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-02 03:48:49.837256 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:48:49.837409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-02 03:48:49.837432 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:48:49.837445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-02 03:48:49.837456 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:48:49.837479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-02 03:48:49.837491 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:48:49.837512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-02 03:48:49.837531 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:48:49.837547 | orchestrator |
2026-02-02 03:48:49.837563 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2026-02-02 03:48:49.837584 | orchestrator | Monday 02 February 2026 03:48:46 +0000 (0:00:02.234) 0:02:06.092 *******
2026-02-02 03:48:49.837629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696',
'listen_port': '9696'}}}})
2026-02-02 03:48:49.837659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-02 03:48:49.837677 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-02 03:48:49.837702 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-02 03:48:49.837720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-02 03:48:49.837791 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-02 03:51:01.268725 | orchestrator |
2026-02-02 03:51:01.268949 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-02-02 03:51:01.268981 | orchestrator | Monday 02 February 2026 03:48:49 +0000 (0:00:02.833) 0:02:08.925 *******
2026-02-02 03:51:01.268999 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:51:01.269016 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:51:01.269051 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:51:01.269065 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:51:01.269096 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:51:01.269113 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:51:01.269129 | orchestrator |
2026-02-02 03:51:01.269144 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-02-02 03:51:01.269160 | orchestrator | Monday 02 February 2026 03:48:50 +0000 (0:00:00.593) 0:02:09.519 *******
2026-02-02 03:51:01.269177 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:51:01.269192 | orchestrator |
2026-02-02 03:51:01.269210 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-02-02 03:51:01.269226 | orchestrator | Monday 02 February 2026 03:48:52 +0000 (0:00:02.482) 0:02:12.001 *******
2026-02-02 03:51:01.269244 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:51:01.269262 | orchestrator |
2026-02-02 03:51:01.269280 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-02-02 03:51:01.269298 | orchestrator | Monday 02 February 2026 03:48:54 +0000 (0:00:02.073) 0:02:14.074 *******
2026-02-02 03:51:01.269316 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:51:01.269333 | orchestrator |
2026-02-02 03:51:01.269351 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-02 03:51:01.269369 | orchestrator | Monday 02 February 2026 03:49:33 +0000 (0:00:38.784) 0:02:52.859 *******
2026-02-02 03:51:01.269387 | orchestrator |
2026-02-02 03:51:01.269405 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-02 03:51:01.269422 | orchestrator | Monday 02 February 2026 03:49:33 +0000 (0:00:00.073) 0:02:52.932 *******
2026-02-02 03:51:01.269439 | orchestrator |
2026-02-02 03:51:01.269457 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-02 03:51:01.269475 | orchestrator | Monday 02 February 2026 03:49:33 +0000 (0:00:00.081) 0:02:53.014 *******
2026-02-02 03:51:01.269493 | orchestrator |
2026-02-02 03:51:01.269510 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-02 03:51:01.269527 | orchestrator | Monday 02 February 2026 03:49:33 +0000 (0:00:00.072) 0:02:53.086 *******
2026-02-02 03:51:01.269544 | orchestrator |
2026-02-02 03:51:01.269583 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-02 03:51:01.269603 | orchestrator | Monday 02 February 2026 03:49:34 +0000 (0:00:00.074) 0:02:53.161 *******
2026-02-02 03:51:01.269620 | orchestrator |
2026-02-02 03:51:01.269635 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-02 03:51:01.269652 | orchestrator | Monday 02 February 2026 03:49:34 +0000 (0:00:00.071) 0:02:53.232 *******
2026-02-02 03:51:01.269669 | orchestrator |
2026-02-02 03:51:01.269686 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container]
*******************
2026-02-02 03:51:01.269704 | orchestrator | Monday 02 February 2026 03:49:34 +0000 (0:00:00.076) 0:02:53.309 *******
2026-02-02 03:51:01.269838 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:51:01.269886 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:51:01.269897 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:51:01.269906 | orchestrator |
2026-02-02 03:51:01.269916 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-02-02 03:51:01.269926 | orchestrator | Monday 02 February 2026 03:49:59 +0000 (0:00:24.919) 0:03:18.229 *******
2026-02-02 03:51:01.269936 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:51:01.269946 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:51:01.269956 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:51:01.269965 | orchestrator |
2026-02-02 03:51:01.269975 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 03:51:01.269986 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-02 03:51:01.269997 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-02-02 03:51:01.270008 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-02-02 03:51:01.270073 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-02 03:51:01.270084 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-02 03:51:01.270094 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-02 03:51:01.270104 | orchestrator |
2026-02-02 03:51:01.270114 | orchestrator |
2026-02-02 03:51:01.270124 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 03:51:01.270133 | orchestrator | Monday 02 February 2026 03:51:00 +0000 (0:01:01.590) 0:04:19.819 *******
2026-02-02 03:51:01.270143 | orchestrator | ===============================================================================
2026-02-02 03:51:01.270153 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 61.59s
2026-02-02 03:51:01.270163 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 38.78s
2026-02-02 03:51:01.270172 | orchestrator | neutron : Restart neutron-server container ----------------------------- 24.92s
2026-02-02 03:51:01.270208 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.38s
2026-02-02 03:51:01.270218 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.08s
2026-02-02 03:51:01.270228 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 5.89s
2026-02-02 03:51:01.270237 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.05s
2026-02-02 03:51:01.270265 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.98s
2026-02-02 03:51:01.270275 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.75s
2026-02-02 03:51:01.270285 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.54s
2026-02-02 03:51:01.270295 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.36s
2026-02-02 03:51:01.270304 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.23s
2026-02-02 03:51:01.270314 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.14s
2026-02-02 03:51:01.270323 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.03s
2026-02-02 03:51:01.270333 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.03s
2026-02-02 03:51:01.270343 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.00s
2026-02-02 03:51:01.270362 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.85s
2026-02-02 03:51:01.270372 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.83s
2026-02-02 03:51:01.270382 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 2.53s
2026-02-02 03:51:01.270392 | orchestrator | neutron : Copying over existing policy file ----------------------------- 2.50s
2026-02-02 03:51:03.882441 | orchestrator | 2026-02-02 03:51:03 | INFO  | Task c1d5d6e4-1f1e-4ecc-ab17-7044e048a66f (nova) was prepared for execution.
2026-02-02 03:51:03.882541 | orchestrator | 2026-02-02 03:51:03 | INFO  | It takes a moment until task c1d5d6e4-1f1e-4ecc-ab17-7044e048a66f (nova) has been started and output is visible here.
2026-02-02 03:52:55.438667 | orchestrator |
2026-02-02 03:52:55.438780 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-02 03:52:55.438791 | orchestrator |
2026-02-02 03:52:55.438797 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-02-02 03:52:55.438802 | orchestrator | Monday 02 February 2026 03:51:08 +0000 (0:00:00.294) 0:00:00.294 *******
2026-02-02 03:52:55.438807 | orchestrator | changed: [testbed-manager]
2026-02-02 03:52:55.438813 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:52:55.438818 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:52:55.438823 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:52:55.438827 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:52:55.438832 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:52:55.438837 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:52:55.438841 | orchestrator |
2026-02-02 03:52:55.438846 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-02 03:52:55.438851 | orchestrator | Monday 02 February 2026 03:51:09 +0000 (0:00:00.873) 0:00:01.167 *******
2026-02-02 03:52:55.438855 | orchestrator | changed: [testbed-manager]
2026-02-02 03:52:55.438861 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:52:55.438869 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:52:55.438876 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:52:55.438885 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:52:55.438890 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:52:55.438895 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:52:55.438900 | orchestrator |
2026-02-02 03:52:55.438905 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-02 03:52:55.438909 | orchestrator | Monday 02 February 2026 03:51:10 +0000 (0:00:00.898) 0:00:02.065 *******
2026-02-02 03:52:55.438914 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-02-02 03:52:55.438920 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-02-02 03:52:55.438994 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-02-02 03:52:55.439005 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-02-02 03:52:55.439014 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-02-02 03:52:55.439019 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-02-02 03:52:55.439024 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-02-02 03:52:55.439028 | orchestrator |
2026-02-02 03:52:55.439033 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-02-02 03:52:55.439038 | orchestrator |
2026-02-02 03:52:55.439042 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-02 03:52:55.439047 | orchestrator | Monday 02 February 2026 03:51:11 +0000 (0:00:00.736) 0:00:02.802 *******
2026-02-02 03:52:55.439052 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 03:52:55.439056 | orchestrator |
2026-02-02 03:52:55.439061 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-02-02 03:52:55.439065 | orchestrator | Monday 02 February 2026 03:51:11 +0000 (0:00:00.818) 0:00:03.621 *******
2026-02-02 03:52:55.439071 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-02-02 03:52:55.439093 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-02-02 03:52:55.439098 | orchestrator |
2026-02-02 03:52:55.439103 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-02-02 03:52:55.439107 | orchestrator | Monday 02 February 2026 03:51:15 +0000 (0:00:04.035) 0:00:07.657 *******
2026-02-02 03:52:55.439114 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-02 03:52:55.439122 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-02 03:52:55.439129 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:52:55.439137 | orchestrator |
2026-02-02 03:52:55.439144 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-02-02 03:52:55.439150 | orchestrator | Monday 02 February 2026 03:51:19 +0000 (0:00:03.847) 0:00:11.505 *******
2026-02-02 03:52:55.439155 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:52:55.439160 | orchestrator |
2026-02-02 03:52:55.439165 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-02-02 03:52:55.439170 | orchestrator | Monday 02 February 2026 03:51:20 +0000 (0:00:00.674) 0:00:12.179 *******
2026-02-02 03:52:55.439174 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:52:55.439179 | orchestrator |
2026-02-02 03:52:55.439183 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-02-02 03:52:55.439188 | orchestrator | Monday 02 February 2026 03:51:21 +0000 (0:00:01.327) 0:00:13.507 *******
2026-02-02 03:52:55.439193 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:52:55.439197 | orchestrator |
2026-02-02 03:52:55.439202 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-02 03:52:55.439208 | orchestrator | Monday 02 February 2026 03:51:24 +0000 (0:00:02.651) 0:00:16.159 *******
2026-02-02 03:52:55.439213 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:52:55.439218 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:52:55.439223 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:52:55.439229 | orchestrator |
2026-02-02 03:52:55.439234 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-02 03:52:55.439241 | orchestrator | Monday 02 February 2026 03:51:24 +0000 (0:00:00.316) 0:00:16.475 *******
2026-02-02 03:52:55.439249 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:52:55.439257 | orchestrator |
2026-02-02 03:52:55.439265 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-02-02 03:52:55.439273 | orchestrator | Monday 02 February 2026 03:51:53 +0000 (0:00:29.147) 0:00:45.622 *******
2026-02-02 03:52:55.439281 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:52:55.439289 | orchestrator |
2026-02-02 03:52:55.439294 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-02 03:52:55.439300 | orchestrator | Monday 02 February 2026 03:52:07 +0000 (0:00:13.524) 0:00:59.147 *******
2026-02-02 03:52:55.439305 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:52:55.439310 | orchestrator |
2026-02-02 03:52:55.439316 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-02 03:52:55.439321 | orchestrator | Monday 02 February 2026 03:52:19 +0000 (0:00:11.589) 0:01:10.737 *******
2026-02-02 03:52:55.439341 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:52:55.439347 | orchestrator |
2026-02-02 03:52:55.439356 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-02-02 03:52:55.439361 | orchestrator | Monday 02 February 2026 03:52:19 +0000 (0:00:00.720) 0:01:11.458 *******
2026-02-02 03:52:55.439367 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:52:55.439372 | orchestrator |
2026-02-02 03:52:55.439377 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-02 03:52:55.439383 | orchestrator | Monday 02 February 2026 03:52:20 +0000 (0:00:00.548) 0:01:12.006 *******
2026-02-02 03:52:55.439389 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 03:52:55.439394 | orchestrator |
2026-02-02 03:52:55.439400 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-02 03:52:55.439411 | orchestrator | Monday 02 February 2026 03:52:21 +0000 (0:00:00.980) 0:01:12.986 *******
2026-02-02 03:52:55.439416 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:52:55.439422 | orchestrator |
2026-02-02 03:52:55.439427 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-02-02 03:52:55.439433 | orchestrator | Monday 02 February 2026 03:52:37 +0000 (0:00:15.810) 0:01:28.797 *******
2026-02-02 03:52:55.439438 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:52:55.439443 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:52:55.439448 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:52:55.439454 | orchestrator |
2026-02-02 03:52:55.439459 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-02-02 03:52:55.439464 | orchestrator |
2026-02-02 03:52:55.439469 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-02 03:52:55.439475 | orchestrator | Monday 02 February 2026 03:52:37 +0000 (0:00:00.375) 0:01:29.172 *******
2026-02-02 03:52:55.439480 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 03:52:55.439486 | orchestrator |
2026-02-02 03:52:55.439491 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-02-02 03:52:55.439497 | orchestrator | Monday 02 February 2026 03:52:38 +0000 (0:00:00.872) 0:01:30.045 *******
2026-02-02 03:52:55.439502 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:52:55.439507 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:52:55.439513 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:52:55.439518 | orchestrator |
2026-02-02 03:52:55.439523 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-02-02 03:52:55.439528 | orchestrator | Monday 02 February 2026 03:52:40 +0000 (0:00:01.939) 0:01:31.985 *******
2026-02-02 03:52:55.439534 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:52:55.439539 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:52:55.439544 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:52:55.439549 | orchestrator |
2026-02-02 03:52:55.439554 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-02 03:52:55.439560 | orchestrator | Monday 02 February 2026 03:52:42 +0000 (0:00:02.074) 0:01:34.059 *******
2026-02-02 03:52:55.439565 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:52:55.439571 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:52:55.439576 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:52:55.439581 | orchestrator |
2026-02-02 03:52:55.439585 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-02 03:52:55.439590 | orchestrator | Monday 02 February 2026 03:52:42 +0000 (0:00:00.350) 0:01:34.409 *******
2026-02-02 03:52:55.439594 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-02 03:52:55.439599 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:52:55.439603 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-02 03:52:55.439608 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:52:55.439613 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-02 03:52:55.439618 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-02-02 03:52:55.439622 | orchestrator |
2026-02-02 03:52:55.439627 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-02 03:52:55.439631 | orchestrator | Monday 02 February 2026 03:52:50 +0000 (0:00:07.621) 0:01:42.030 *******
2026-02-02 03:52:55.439636 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:52:55.439641 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:52:55.439645 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:52:55.439650 | orchestrator |
2026-02-02 03:52:55.439655 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-02 03:52:55.439659 | orchestrator | Monday 02 February 2026 03:52:50 +0000 (0:00:00.338) 0:01:42.369 *******
2026-02-02 03:52:55.439664 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-02 03:52:55.439668 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:52:55.439673 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-02 03:52:55.439681 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:52:55.439686 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-02 03:52:55.439690 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:52:55.439695 | orchestrator |
2026-02-02 03:52:55.439699 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-02-02 03:52:55.439704 | orchestrator | Monday 02 February 2026 03:52:51 +0000 (0:00:00.913) 0:01:43.283 *******
2026-02-02 03:52:55.439709 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:52:55.439713 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:52:55.439718 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:52:55.439722 | orchestrator |
2026-02-02 03:52:55.439727 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-02-02 03:52:55.439731 | orchestrator | Monday 02 February 2026 03:52:52 +0000 (0:00:00.489) 0:01:43.772 *******
2026-02-02 03:52:55.439736 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:52:55.439740 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:52:55.439745 | orchestrator |
changed: [testbed-node-0] 2026-02-02 03:52:55.439749 | orchestrator | 2026-02-02 03:52:55.439754 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-02-02 03:52:55.439758 | orchestrator | Monday 02 February 2026 03:52:53 +0000 (0:00:00.948) 0:01:44.720 ******* 2026-02-02 03:52:55.439763 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:52:55.439767 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:52:55.439775 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:54:08.861320 | orchestrator | 2026-02-02 03:54:08.861444 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-02-02 03:54:08.861462 | orchestrator | Monday 02 February 2026 03:52:55 +0000 (0:00:02.371) 0:01:47.091 ******* 2026-02-02 03:54:08.861475 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:54:08.861487 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:54:08.861499 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:54:08.861512 | orchestrator | 2026-02-02 03:54:08.861525 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-02 03:54:08.861536 | orchestrator | Monday 02 February 2026 03:53:16 +0000 (0:00:20.927) 0:02:08.019 ******* 2026-02-02 03:54:08.861549 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:54:08.861561 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:54:08.861573 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:54:08.861584 | orchestrator | 2026-02-02 03:54:08.861596 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-02 03:54:08.861607 | orchestrator | Monday 02 February 2026 03:53:26 +0000 (0:00:10.420) 0:02:18.440 ******* 2026-02-02 03:54:08.861619 | orchestrator | ok: [testbed-node-0] 2026-02-02 03:54:08.861631 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:54:08.861643 | orchestrator | skipping: 
[testbed-node-2] 2026-02-02 03:54:08.861654 | orchestrator | 2026-02-02 03:54:08.861666 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-02-02 03:54:08.861677 | orchestrator | Monday 02 February 2026 03:53:27 +0000 (0:00:00.893) 0:02:19.334 ******* 2026-02-02 03:54:08.861689 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:54:08.861700 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:54:08.861712 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:54:08.861722 | orchestrator | 2026-02-02 03:54:08.861734 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-02-02 03:54:08.861746 | orchestrator | Monday 02 February 2026 03:53:38 +0000 (0:00:10.927) 0:02:30.261 ******* 2026-02-02 03:54:08.861758 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:54:08.861769 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:54:08.861780 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:54:08.861791 | orchestrator | 2026-02-02 03:54:08.861802 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-02-02 03:54:08.861813 | orchestrator | Monday 02 February 2026 03:53:39 +0000 (0:00:01.112) 0:02:31.374 ******* 2026-02-02 03:54:08.861851 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:54:08.861865 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:54:08.861876 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:54:08.861888 | orchestrator | 2026-02-02 03:54:08.861900 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-02-02 03:54:08.861912 | orchestrator | 2026-02-02 03:54:08.861923 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-02 03:54:08.861935 | orchestrator | Monday 02 February 2026 03:53:40 +0000 (0:00:00.308) 0:02:31.683 ******* 2026-02-02 03:54:08.861947 | 
orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:54:08.861959 | orchestrator | 2026-02-02 03:54:08.862107 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-02-02 03:54:08.862120 | orchestrator | Monday 02 February 2026 03:53:40 +0000 (0:00:00.852) 0:02:32.536 ******* 2026-02-02 03:54:08.862133 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-02-02 03:54:08.862145 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-02-02 03:54:08.862157 | orchestrator | 2026-02-02 03:54:08.862168 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-02-02 03:54:08.862179 | orchestrator | Monday 02 February 2026 03:53:43 +0000 (0:00:03.048) 0:02:35.584 ******* 2026-02-02 03:54:08.862191 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-02-02 03:54:08.862315 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-02-02 03:54:08.862337 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-02-02 03:54:08.862349 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-02-02 03:54:08.862361 | orchestrator | 2026-02-02 03:54:08.862373 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-02-02 03:54:08.862384 | orchestrator | Monday 02 February 2026 03:53:50 +0000 (0:00:06.613) 0:02:42.197 ******* 2026-02-02 03:54:08.862395 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-02 03:54:08.862407 | orchestrator | 2026-02-02 03:54:08.862418 | orchestrator | TASK [service-ks-register : nova | Creating 
users] ***************************** 2026-02-02 03:54:08.862430 | orchestrator | Monday 02 February 2026 03:53:53 +0000 (0:00:03.008) 0:02:45.205 ******* 2026-02-02 03:54:08.862440 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-02 03:54:08.862451 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-02-02 03:54:08.862462 | orchestrator | 2026-02-02 03:54:08.862473 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-02-02 03:54:08.862485 | orchestrator | Monday 02 February 2026 03:53:57 +0000 (0:00:03.776) 0:02:48.982 ******* 2026-02-02 03:54:08.862496 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-02 03:54:08.862508 | orchestrator | 2026-02-02 03:54:08.862519 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-02-02 03:54:08.862530 | orchestrator | Monday 02 February 2026 03:54:00 +0000 (0:00:03.271) 0:02:52.253 ******* 2026-02-02 03:54:08.862541 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-02-02 03:54:08.862553 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-02-02 03:54:08.862564 | orchestrator | 2026-02-02 03:54:08.862574 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-02-02 03:54:08.862611 | orchestrator | Monday 02 February 2026 03:54:07 +0000 (0:00:06.980) 0:02:59.234 ******* 2026-02-02 03:54:08.862628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-02 03:54:08.862658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-02 03:54:08.862671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-02 03:54:08.862698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-02-02 03:54:13.538591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 03:54:13.538730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 03:54:13.538757 | orchestrator | 2026-02-02 03:54:13.538779 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-02-02 03:54:13.538799 | orchestrator | Monday 02 February 2026 03:54:08 +0000 (0:00:01.277) 0:03:00.511 ******* 2026-02-02 03:54:13.538818 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:54:13.538837 | orchestrator | 2026-02-02 03:54:13.538856 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-02-02 03:54:13.538875 | orchestrator | Monday 02 February 2026 03:54:08 +0000 (0:00:00.153) 0:03:00.665 ******* 2026-02-02 03:54:13.538893 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:54:13.538912 | 
orchestrator | skipping: [testbed-node-1] 2026-02-02 03:54:13.538929 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:54:13.538947 | orchestrator | 2026-02-02 03:54:13.538997 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-02-02 03:54:13.539018 | orchestrator | Monday 02 February 2026 03:54:09 +0000 (0:00:00.336) 0:03:01.001 ******* 2026-02-02 03:54:13.539038 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-02 03:54:13.539056 | orchestrator | 2026-02-02 03:54:13.539093 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-02-02 03:54:13.539128 | orchestrator | Monday 02 February 2026 03:54:10 +0000 (0:00:00.730) 0:03:01.732 ******* 2026-02-02 03:54:13.539151 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:54:13.539170 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:54:13.539189 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:54:13.539206 | orchestrator | 2026-02-02 03:54:13.539226 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-02 03:54:13.539245 | orchestrator | Monday 02 February 2026 03:54:10 +0000 (0:00:00.609) 0:03:02.342 ******* 2026-02-02 03:54:13.539266 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:54:13.539286 | orchestrator | 2026-02-02 03:54:13.539305 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-02 03:54:13.539321 | orchestrator | Monday 02 February 2026 03:54:11 +0000 (0:00:00.604) 0:03:02.946 ******* 2026-02-02 03:54:13.539362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-02 03:54:13.539451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-02 03:54:13.539480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-02 03:54:13.539501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 03:54:13.539558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 03:54:13.539602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 03:54:13.539622 | orchestrator | 2026-02-02 03:54:13.539653 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-02 03:54:15.545145 | orchestrator | Monday 02 February 2026 03:54:13 +0000 (0:00:02.242) 0:03:05.189 ******* 2026-02-02 03:54:15.545243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-02 03:54:15.545259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 03:54:15.545268 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:54:15.545277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-02 03:54:15.545306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 03:54:15.545327 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:54:15.545353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-02 03:54:15.545362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 03:54:15.545369 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:54:15.545377 | orchestrator | 2026-02-02 03:54:15.545384 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-02 03:54:15.545389 | orchestrator | Monday 02 February 2026 03:54:14 +0000 (0:00:01.125) 0:03:06.315 
******* 2026-02-02 03:54:15.545394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-02 03:54:15.545404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 03:54:15.545409 | orchestrator | skipping: [testbed-node-0] 
2026-02-02 03:54:15.545424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-02 03:54:17.807293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 03:54:17.807432 | orchestrator | skipping: [testbed-node-1] 2026-02-02 
03:54:17.807443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-02 03:54:17.807468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 03:54:17.807473 | orchestrator | skipping: [testbed-node-2] 2026-02-02 
03:54:17.807478 | orchestrator | 2026-02-02 03:54:17.807483 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-02-02 03:54:17.807489 | orchestrator | Monday 02 February 2026 03:54:15 +0000 (0:00:00.887) 0:03:07.203 ******* 2026-02-02 03:54:17.807505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-02 03:54:17.807523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-02 03:54:17.807529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-02 03:54:17.807538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 03:54:17.807548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 03:54:17.807560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}}) 2026-02-02 03:54:25.104220 | orchestrator | 2026-02-02 03:54:25.104367 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-02-02 03:54:25.104398 | orchestrator | Monday 02 February 2026 03:54:17 +0000 (0:00:02.256) 0:03:09.459 ******* 2026-02-02 03:54:25.104424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-02 03:54:25.104483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-02 03:54:25.104525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-02 03:54:25.104617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 03:54:25.104645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 03:54:25.104679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 03:54:25.104700 | orchestrator | 2026-02-02 03:54:25.104723 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-02-02 03:54:25.104742 | orchestrator | Monday 02 February 2026 03:54:24 +0000 (0:00:06.580) 0:03:16.039 ******* 2026-02-02 03:54:25.104767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-02 03:54:25.104782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 03:54:25.104796 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:54:25.104824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-02 03:54:29.646072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 03:54:29.646188 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:54:29.646210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-02 03:54:29.646259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 03:54:29.646283 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:54:29.646303 | orchestrator | 2026-02-02 03:54:29.646323 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-02-02 03:54:29.646344 | orchestrator | Monday 02 February 2026 03:54:25 +0000 (0:00:00.715) 0:03:16.755 ******* 2026-02-02 03:54:29.646364 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:54:29.646381 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:54:29.646402 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:54:29.646421 | orchestrator | 2026-02-02 03:54:29.646435 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-02-02 03:54:29.646447 | orchestrator | Monday 02 February 2026 03:54:26 +0000 (0:00:01.549) 0:03:18.304 ******* 2026-02-02 03:54:29.646458 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:54:29.646470 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:54:29.646489 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:54:29.646506 | orchestrator | 2026-02-02 03:54:29.646527 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-02-02 03:54:29.646545 | orchestrator | Monday 02 February 2026 03:54:26 +0000 (0:00:00.329) 0:03:18.634 ******* 2026-02-02 03:54:29.646596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-02 03:54:29.646642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-02 03:54:29.646665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-02 03:54:29.646678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 03:54:29.646698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 03:54:29.646719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 03:55:05.193286 | orchestrator | 2026-02-02 03:55:05.193365 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-02 03:55:05.193374 | orchestrator | Monday 02 February 2026 03:54:29 +0000 (0:00:02.178) 0:03:20.813 ******* 2026-02-02 03:55:05.193379 | orchestrator | 2026-02-02 03:55:05.193384 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-02 03:55:05.193389 | orchestrator | Monday 02 February 2026 03:54:29 +0000 (0:00:00.157) 0:03:20.970 ******* 2026-02-02 
03:55:05.193394 | orchestrator | 2026-02-02 03:55:05.193399 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-02 03:55:05.193403 | orchestrator | Monday 02 February 2026 03:54:29 +0000 (0:00:00.163) 0:03:21.134 ******* 2026-02-02 03:55:05.193408 | orchestrator | 2026-02-02 03:55:05.193415 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-02-02 03:55:05.193422 | orchestrator | Monday 02 February 2026 03:54:29 +0000 (0:00:00.161) 0:03:21.295 ******* 2026-02-02 03:55:05.193429 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:55:05.193437 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:55:05.193444 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:55:05.193452 | orchestrator | 2026-02-02 03:55:05.193459 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-02-02 03:55:05.193467 | orchestrator | Monday 02 February 2026 03:54:47 +0000 (0:00:18.359) 0:03:39.654 ******* 2026-02-02 03:55:05.193474 | orchestrator | changed: [testbed-node-0] 2026-02-02 03:55:05.193478 | orchestrator | changed: [testbed-node-2] 2026-02-02 03:55:05.193483 | orchestrator | changed: [testbed-node-1] 2026-02-02 03:55:05.193489 | orchestrator | 2026-02-02 03:55:05.193496 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-02-02 03:55:05.193503 | orchestrator | 2026-02-02 03:55:05.193510 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-02 03:55:05.193516 | orchestrator | Monday 02 February 2026 03:54:53 +0000 (0:00:05.954) 0:03:45.609 ******* 2026-02-02 03:55:05.193525 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:55:05.193533 | orchestrator | 2026-02-02 03:55:05.193541 | 
orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-02 03:55:05.193562 | orchestrator | Monday 02 February 2026 03:54:55 +0000 (0:00:01.368) 0:03:46.977 ******* 2026-02-02 03:55:05.193569 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:55:05.193576 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:55:05.193583 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:55:05.193613 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:55:05.193622 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:55:05.193629 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:55:05.193636 | orchestrator | 2026-02-02 03:55:05.193643 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-02-02 03:55:05.193650 | orchestrator | Monday 02 February 2026 03:54:55 +0000 (0:00:00.637) 0:03:47.615 ******* 2026-02-02 03:55:05.193654 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:55:05.193658 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:55:05.193663 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:55:05.193667 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 03:55:05.193672 | orchestrator | 2026-02-02 03:55:05.193677 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-02 03:55:05.193681 | orchestrator | Monday 02 February 2026 03:54:57 +0000 (0:00:01.105) 0:03:48.720 ******* 2026-02-02 03:55:05.193686 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-02-02 03:55:05.193691 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-02-02 03:55:05.193696 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-02-02 03:55:05.193700 | orchestrator | 2026-02-02 03:55:05.193704 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-02 
03:55:05.193709 | orchestrator | Monday 02 February 2026 03:54:57 +0000 (0:00:00.661) 0:03:49.382 ******* 2026-02-02 03:55:05.193713 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-02-02 03:55:05.193718 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-02-02 03:55:05.193722 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-02-02 03:55:05.193726 | orchestrator | 2026-02-02 03:55:05.193731 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-02 03:55:05.193735 | orchestrator | Monday 02 February 2026 03:54:59 +0000 (0:00:01.416) 0:03:50.798 ******* 2026-02-02 03:55:05.193739 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-02-02 03:55:05.193744 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:55:05.193748 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-02-02 03:55:05.193753 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:55:05.193760 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-02-02 03:55:05.193767 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:55:05.193774 | orchestrator | 2026-02-02 03:55:05.193782 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-02-02 03:55:05.193788 | orchestrator | Monday 02 February 2026 03:54:59 +0000 (0:00:00.558) 0:03:51.357 ******* 2026-02-02 03:55:05.193795 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-02 03:55:05.193802 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-02 03:55:05.193809 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-02 03:55:05.193817 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-02 03:55:05.193823 | orchestrator | changed: [testbed-node-5] => 
(item=net.bridge.bridge-nf-call-iptables) 2026-02-02 03:55:05.193830 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:55:05.193838 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-02 03:55:05.193859 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-02 03:55:05.193866 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:55:05.193871 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-02 03:55:05.193877 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-02 03:55:05.193882 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-02 03:55:05.193887 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-02 03:55:05.193898 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:55:05.193903 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-02 03:55:05.193909 | orchestrator | 2026-02-02 03:55:05.193916 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-02-02 03:55:05.193923 | orchestrator | Monday 02 February 2026 03:55:00 +0000 (0:00:00.974) 0:03:52.331 ******* 2026-02-02 03:55:05.193930 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:55:05.193938 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:55:05.193946 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:55:05.193952 | orchestrator | changed: [testbed-node-3] 2026-02-02 03:55:05.193961 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:55:05.193967 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:55:05.193972 | orchestrator | 2026-02-02 03:55:05.193977 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-02-02 03:55:05.193983 | orchestrator | 
Monday 02 February 2026 03:55:01 +0000 (0:00:01.137) 0:03:53.468 ******* 2026-02-02 03:55:05.193988 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:55:05.194114 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:55:05.194120 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:55:05.194126 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:55:05.194131 | orchestrator | changed: [testbed-node-3] 2026-02-02 03:55:05.194137 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:55:05.194141 | orchestrator | 2026-02-02 03:55:05.194147 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-02-02 03:55:05.194152 | orchestrator | Monday 02 February 2026 03:55:03 +0000 (0:00:01.603) 0:03:55.071 ******* 2026-02-02 03:55:05.194165 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-02 03:55:05.194174 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-02 03:55:05.194186 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-02 03:55:06.823587 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-02 03:55:06.823713 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-02 03:55:06.823761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-02 03:55:06.823782 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-02 03:55:06.823800 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-02 03:55:06.823819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-02 03:55:06.823885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-02 03:55:06.823904 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-02 03:55:06.823928 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-02 03:55:06.823946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 03:55:06.823962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 03:55:06.823981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 03:55:06.824056 | orchestrator | 2026-02-02 03:55:06.824077 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-02 03:55:06.824094 | orchestrator | Monday 02 
February 2026 03:55:05 +0000 (0:00:02.055) 0:03:57.127 ******* 2026-02-02 03:55:06.824112 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 03:55:06.824130 | orchestrator | 2026-02-02 03:55:06.824146 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-02 03:55:06.824175 | orchestrator | Monday 02 February 2026 03:55:06 +0000 (0:00:01.351) 0:03:58.478 ******* 2026-02-02 03:55:10.010812 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-02 03:55:10.010943 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-02 03:55:10.010961 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-02 03:55:10.010973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-02 
03:55:10.011060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-02 03:55:10.011093 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-02 03:55:10.011105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-02 03:55:10.011121 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-02 03:55:10.011146 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-02 03:55:10.011156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 03:55:10.011175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 03:55:10.011194 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-02 03:55:11.538393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 03:55:11.538515 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-02 03:55:11.538548 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-02 03:55:11.538559 | orchestrator | 2026-02-02 03:55:11.538569 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-02 03:55:11.538580 | orchestrator | Monday 02 February 2026 03:55:10 +0000 (0:00:03.587) 0:04:02.066 ******* 2026-02-02 03:55:11.538590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-02 03:55:11.538618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-02 03:55:11.538647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-02 03:55:11.538656 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:55:11.538671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-02 03:55:11.538681 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-02 03:55:11.538691 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-02 03:55:11.538709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-02 03:55:11.538719 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:55:11.538736 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-02 03:55:13.702195 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-02 03:55:13.702372 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:55:13.702414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-02 03:55:13.702440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-02 03:55:13.702498 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:55:13.702514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-02 03:55:13.702530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-02 03:55:13.702544 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:55:13.702560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-02 03:55:13.702598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-02 03:55:13.702614 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:55:13.702628 | orchestrator |
2026-02-02 03:55:13.702644 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-02-02 03:55:13.702660 | orchestrator | Monday 02 February 2026  03:55:11 +0000 (0:00:01.529)       0:04:03.595 *******
2026-02-02 03:55:13.702683 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-02 03:55:13.702706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-02 03:55:13.702721 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-02 03:55:13.702735 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:55:13.702749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-02 03:55:13.702773 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-02 03:55:18.238804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-02 03:55:18.238879 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:55:18.238888 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-02 03:55:18.238906 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-02 03:55:18.238912 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-02 03:55:18.238916 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:55:18.238921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-02 03:55:18.238935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-02 03:55:18.238939 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:55:18.238947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-02 03:55:18.238956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-02 03:55:18.238960 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:55:18.238964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-02 03:55:18.238975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-02 03:55:18.238979 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:55:18.238983 | orchestrator |
2026-02-02 03:55:18.238988 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-02 03:55:18.238993 | orchestrator | Monday 02 February 2026  03:55:14 +0000 (0:00:02.337)       0:04:05.933 *******
2026-02-02 03:55:18.239026 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:55:18.239030 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:55:18.239034 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:55:18.239039 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 03:55:18.239043 | orchestrator |
2026-02-02 03:55:18.239046 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-02-02 03:55:18.239050 | orchestrator | Monday 02 February 2026  03:55:15 +0000 (0:00:01.222)       0:04:07.156 *******
2026-02-02 03:55:18.239054 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-02 03:55:18.239058 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-02 03:55:18.239062 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-02 03:55:18.239066 | orchestrator |
2026-02-02 03:55:18.239070 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-02-02 03:55:18.239074 | orchestrator | Monday 02 February 2026  03:55:16 +0000 (0:00:00.990)       0:04:08.147 *******
2026-02-02 03:55:18.239078 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-02 03:55:18.239081 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-02 03:55:18.239085 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-02 03:55:18.239089 | orchestrator |
2026-02-02 03:55:18.239093 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-02-02 03:55:18.239097 | orchestrator | Monday 02 February 2026  03:55:17 +0000 (0:00:01.167)       0:04:09.314 *******
2026-02-02 03:55:18.239105 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:55:18.239110 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:55:18.239113 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:55:18.239117 | orchestrator |
2026-02-02 03:55:18.239124 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-02-02 03:55:39.891626 | orchestrator | Monday 02 February 2026  03:55:18 +0000 (0:00:00.573)       0:04:09.887 *******
2026-02-02 03:55:39.891728 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:55:39.891738 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:55:39.891745 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:55:39.891751 | orchestrator |
2026-02-02 03:55:39.891758 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-02-02 03:55:39.891764 | orchestrator | Monday 02 February 2026  03:55:18 +0000 (0:00:00.522)       0:04:10.410 *******
2026-02-02 03:55:39.891770 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-02 03:55:39.891778 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-02 03:55:39.891784 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-02 03:55:39.891790 | orchestrator |
2026-02-02 03:55:39.891796 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-02-02 03:55:39.891802 | orchestrator | Monday 02 February 2026  03:55:19 +0000 (0:00:01.143)       0:04:11.554 *******
2026-02-02 03:55:39.891823 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-02 03:55:39.891829 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-02 03:55:39.891836 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-02 03:55:39.891841 | orchestrator |
2026-02-02 03:55:39.891847 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-02-02 03:55:39.891853 | orchestrator | Monday 02 February 2026  03:55:21 +0000 (0:00:01.367)       0:04:12.921 *******
2026-02-02 03:55:39.891859 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-02 03:55:39.891865 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-02 03:55:39.891871 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-02 03:55:39.891876 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-02-02 03:55:39.891882 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-02-02 03:55:39.891887 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-02-02 03:55:39.891893 | orchestrator |
2026-02-02 03:55:39.891899 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-02-02 03:55:39.891905 | orchestrator | Monday 02 February 2026  03:55:25 +0000 (0:00:03.822)       0:04:16.744 *******
2026-02-02 03:55:39.891911 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:55:39.891917 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:55:39.891923 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:55:39.891929 | orchestrator |
2026-02-02 03:55:39.891935 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-02-02 03:55:39.891941 | orchestrator | Monday 02 February 2026  03:55:25 +0000 (0:00:00.356)       0:04:17.101 *******
2026-02-02 03:55:39.891947 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:55:39.891953 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:55:39.891959 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:55:39.891965 | orchestrator |
2026-02-02 03:55:39.891972 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-02-02 03:55:39.891979 | orchestrator | Monday 02 February 2026  03:55:25 +0000 (0:00:00.322)       0:04:17.423 *******
2026-02-02 03:55:39.891985 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:55:39.891991 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:55:39.891997 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:55:39.892002 | orchestrator |
2026-02-02 03:55:39.892064 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-02-02 03:55:39.892071 | orchestrator | Monday 02 February 2026  03:55:27 +0000 (0:00:01.494)       0:04:18.918 *******
2026-02-02 03:55:39.892078 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-02 03:55:39.892108 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-02 03:55:39.892115 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-02 03:55:39.892121 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-02 03:55:39.892127 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-02 03:55:39.892132 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-02 03:55:39.892138 | orchestrator |
2026-02-02 03:55:39.892144 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-02-02 03:55:39.892150 | orchestrator | Monday 02 February 2026  03:55:30 +0000 (0:00:03.226)       0:04:22.145 *******
2026-02-02 03:55:39.892156 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-02 03:55:39.892162 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-02 03:55:39.892167 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-02 03:55:39.892173 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-02 03:55:39.892179 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:55:39.892184 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-02 03:55:39.892190 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:55:39.892197 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-02 03:55:39.892203 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:55:39.892209 | orchestrator |
2026-02-02 03:55:39.892215 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-02-02 03:55:39.892221 | orchestrator | Monday 02 February 2026  03:55:33 +0000 (0:00:03.338)       0:04:25.483 *******
2026-02-02 03:55:39.892228 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:55:39.892233 | orchestrator |
2026-02-02 03:55:39.892257 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-02-02 03:55:39.892264 | orchestrator | Monday 02 February 2026  03:55:33 +0000 (0:00:00.134)       0:04:25.618 *******
2026-02-02 03:55:39.892270 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:55:39.892277 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:55:39.892283 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:55:39.892289 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:55:39.892295 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:55:39.892301 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:55:39.892307 | orchestrator |
2026-02-02 03:55:39.892314 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-02-02 03:55:39.892320 | orchestrator | Monday 02 February 2026  03:55:34 +0000 (0:00:00.888)       0:04:26.506 *******
2026-02-02 03:55:39.892326 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-02 03:55:39.892332 | orchestrator |
2026-02-02 03:55:39.892338 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-02-02 03:55:39.892345 | orchestrator | Monday 02 February 2026  03:55:35 +0000 (0:00:00.786)       0:04:27.292 *******
2026-02-02 03:55:39.892357 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:55:39.892363 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:55:39.892370 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:55:39.892376 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:55:39.892382 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:55:39.892388 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:55:39.892393 | orchestrator |
2026-02-02 03:55:39.892400 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-02-02 03:55:39.892406 | orchestrator | Monday 02 February 2026  03:55:36 +0000 (0:00:00.625)       0:04:27.918 *******
2026-02-02 03:55:39.892421 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-02 03:55:39.892432 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-02 03:55:39.892438 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-02 03:55:39.892452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-02 03:55:40.851086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-02 03:55:40.851219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-02 03:55:40.851236 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-02 03:55:40.851251 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-02 03:55:40.851264 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-02 03:55:40.851277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-02 03:55:40.851308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-02 03:55:40.851329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-02 03:55:40.851351 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-02 03:55:40.851367 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev',
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-02 03:55:40.851381 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-02 03:55:40.851396 | orchestrator | 2026-02-02 03:55:40.851410 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-02-02 03:55:40.851424 | orchestrator | Monday 02 February 2026 03:55:40 +0000 (0:00:03.902) 0:04:31.821 ******* 2026-02-02 03:55:40.851446 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-02 03:55:46.134428 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-02 03:55:46.134523 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-02 03:55:46.134532 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-02 03:55:46.134539 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-02 03:55:46.134546 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-02 03:55:46.134567 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-02 03:55:46.134585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-02 03:55:46.134592 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-02 03:55:46.134599 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-02 03:55:46.134605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-02 03:55:46.134611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-02 03:55:46.134623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 03:56:05.770396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 03:56:05.770489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 03:56:05.770498 | orchestrator | 2026-02-02 03:56:05.770506 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-02-02 03:56:05.770514 | orchestrator | Monday 02 February 2026 03:55:46 +0000 (0:00:06.561) 0:04:38.382 ******* 2026-02-02 03:56:05.770520 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:56:05.770527 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:56:05.770533 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:56:05.770539 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:56:05.770544 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:56:05.770550 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:56:05.770556 | orchestrator | 2026-02-02 03:56:05.770562 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-02-02 03:56:05.770568 | orchestrator | Monday 02 February 2026 03:55:48 +0000 (0:00:01.641) 0:04:40.025 ******* 2026-02-02 03:56:05.770574 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-02 03:56:05.770581 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-02 03:56:05.770587 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-02 03:56:05.770593 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-02 03:56:05.770599 | orchestrator | changed: [testbed-node-4] => 
(item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-02 03:56:05.770604 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-02 03:56:05.770611 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-02 03:56:05.770617 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:56:05.770623 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-02 03:56:05.770629 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:56:05.770635 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-02 03:56:05.770641 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:56:05.770647 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-02 03:56:05.770653 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-02 03:56:05.770677 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-02 03:56:05.770683 | orchestrator | 2026-02-02 03:56:05.770690 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-02-02 03:56:05.770699 | orchestrator | Monday 02 February 2026 03:55:52 +0000 (0:00:03.772) 0:04:43.797 ******* 2026-02-02 03:56:05.770708 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:56:05.770716 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:56:05.770722 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:56:05.770730 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:56:05.770740 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:56:05.770746 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:56:05.770752 | orchestrator | 2026-02-02 03:56:05.770758 | orchestrator | TASK 
[nova-cell : Copying over libvirt SASL configuration] ********************* 2026-02-02 03:56:05.770764 | orchestrator | Monday 02 February 2026 03:55:52 +0000 (0:00:00.845) 0:04:44.643 ******* 2026-02-02 03:56:05.770770 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-02 03:56:05.770776 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-02 03:56:05.770782 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-02 03:56:05.770787 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-02 03:56:05.770805 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-02 03:56:05.770811 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-02 03:56:05.770821 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-02 03:56:05.770827 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-02 03:56:05.770833 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-02 03:56:05.770839 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-02 03:56:05.770845 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:56:05.770851 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-02 03:56:05.770857 | orchestrator | 
skipping: [testbed-node-2] 2026-02-02 03:56:05.770863 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-02 03:56:05.770868 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:56:05.770874 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-02 03:56:05.770880 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-02 03:56:05.770886 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-02 03:56:05.770894 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-02 03:56:05.770904 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-02 03:56:05.770913 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-02 03:56:05.770921 | orchestrator | 2026-02-02 03:56:05.770930 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-02-02 03:56:05.770945 | orchestrator | Monday 02 February 2026 03:55:58 +0000 (0:00:05.527) 0:04:50.171 ******* 2026-02-02 03:56:05.770965 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-02 03:56:05.770975 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-02 03:56:05.770985 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-02 03:56:05.770994 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-02 03:56:05.771003 | 
orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-02 03:56:05.771011 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-02 03:56:05.771044 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-02 03:56:05.771055 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-02 03:56:05.771064 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-02 03:56:05.771073 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-02 03:56:05.771082 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-02 03:56:05.771091 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-02 03:56:05.771100 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-02 03:56:05.771109 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-02 03:56:05.771118 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:56:05.771126 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-02 03:56:05.771134 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-02 03:56:05.771143 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:56:05.771152 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-02 03:56:05.771161 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:56:05.771168 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-02 03:56:05.771177 | orchestrator | changed: [testbed-node-3] => 
(item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-02 03:56:05.771186 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-02 03:56:05.771196 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-02 03:56:05.771206 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-02 03:56:05.771223 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-02 03:56:10.552141 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-02 03:56:10.552221 | orchestrator | 2026-02-02 03:56:10.552244 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-02-02 03:56:10.552252 | orchestrator | Monday 02 February 2026 03:56:05 +0000 (0:00:07.236) 0:04:57.407 ******* 2026-02-02 03:56:10.552258 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:56:10.552264 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:56:10.552270 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:56:10.552276 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:56:10.552281 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:56:10.552287 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:56:10.552293 | orchestrator | 2026-02-02 03:56:10.552298 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-02-02 03:56:10.552304 | orchestrator | Monday 02 February 2026 03:56:06 +0000 (0:00:00.645) 0:04:58.053 ******* 2026-02-02 03:56:10.552309 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:56:10.552329 | orchestrator | skipping: [testbed-node-4] 2026-02-02 03:56:10.552335 | orchestrator | skipping: [testbed-node-5] 2026-02-02 03:56:10.552340 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:56:10.552346 | 
orchestrator | skipping: [testbed-node-1] 2026-02-02 03:56:10.552351 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:56:10.552357 | orchestrator | 2026-02-02 03:56:10.552362 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-02-02 03:56:10.552368 | orchestrator | Monday 02 February 2026 03:56:07 +0000 (0:00:00.869) 0:04:58.922 ******* 2026-02-02 03:56:10.552373 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:56:10.552379 | orchestrator | changed: [testbed-node-3] 2026-02-02 03:56:10.552385 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:56:10.552390 | orchestrator | changed: [testbed-node-5] 2026-02-02 03:56:10.552396 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:56:10.552401 | orchestrator | changed: [testbed-node-4] 2026-02-02 03:56:10.552406 | orchestrator | 2026-02-02 03:56:10.552412 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-02-02 03:56:10.552418 | orchestrator | Monday 02 February 2026 03:56:09 +0000 (0:00:01.863) 0:05:00.786 ******* 2026-02-02 03:56:10.552426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}})  2026-02-02 03:56:10.552435 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-02 03:56:10.552442 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-02 03:56:10.552464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-02 03:56:10.552476 | orchestrator | skipping: [testbed-node-3] 2026-02-02 03:56:10.552482 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-02 03:56:10.552488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  
2026-02-02 03:56:10.552493 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:56:10.552499 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-02 03:56:10.552505 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-02 03:56:10.552534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-02 03:56:14.253826 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:56:14.253947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-02 03:56:14.253965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-02 03:56:14.253974 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:56:14.253982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-02 03:56:14.253990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-02 03:56:14.253997 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:56:14.254005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-02 03:56:14.254013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-02 03:56:14.254126 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:56:14.254135 | orchestrator |
2026-02-02 03:56:14.254144 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-02-02 03:56:14.254153 | orchestrator | Monday 02 February 2026 03:56:10 +0000 (0:00:01.692) 0:05:02.479 *******
2026-02-02 03:56:14.254186 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-02-02 03:56:14.254211 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-02-02 03:56:14.254225 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:56:14.254233 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-02-02 03:56:14.254241 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-02-02 03:56:14.254248 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:56:14.254257 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-02-02 03:56:14.254270 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-02-02 03:56:14.254281 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:56:14.254293 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-02-02 03:56:14.254305 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-02-02 03:56:14.254317 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:56:14.254328 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-02-02 03:56:14.254340 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-02-02 03:56:14.254352 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:56:14.254365 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-02-02 03:56:14.254376 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-02-02 03:56:14.254389 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:56:14.254401 | orchestrator |
2026-02-02 03:56:14.254414 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2026-02-02 03:56:14.254428 | orchestrator | Monday 02 February 2026 03:56:11 +0000 (0:00:00.741) 0:05:03.220 *******
2026-02-02 03:56:14.254442 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-02 03:56:14.254457 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-02 03:56:14.254474 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-02 03:56:14.254495 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-02 03:56:16.505618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-02 03:56:16.505711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-02 03:56:16.505720 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-02 03:56:16.505728 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-02 03:56:16.505750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-02 03:56:16.505757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-02 03:56:16.505793 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-02 03:56:16.505802 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-02 03:56:16.505807 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-02 03:56:16.505813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-02 03:56:16.505823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-02 03:56:16.505829 | orchestrator |
2026-02-02 03:56:16.505835 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-02 03:56:16.505843 | orchestrator | Monday 02 February 2026 03:56:14 +0000 (0:00:02.969) 0:05:06.189 *******
2026-02-02 03:56:16.505848 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:56:16.505854 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:56:16.505860 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:56:16.505865 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:56:16.505870 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:56:16.505875 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:56:16.505880 | orchestrator |
2026-02-02 03:56:16.505885 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-02 03:56:16.505890 | orchestrator | Monday 02 February 2026 03:56:15 +0000 (0:00:00.843) 0:05:07.033 *******
2026-02-02 03:56:16.505895 | orchestrator |
2026-02-02 03:56:16.505901 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-02 03:56:16.505906 | orchestrator | Monday 02 February 2026 03:56:15 +0000 (0:00:00.158) 0:05:07.191 *******
2026-02-02 03:56:16.505911 | orchestrator |
2026-02-02 03:56:16.505916 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-02 03:56:16.505925 | orchestrator | Monday 02 February 2026 03:56:15 +0000 (0:00:00.145) 0:05:07.337 *******
2026-02-02 03:56:16.505930 | orchestrator |
2026-02-02 03:56:16.505936 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-02 03:56:16.505944 | orchestrator | Monday 02 February 2026 03:56:15 +0000 (0:00:00.192) 0:05:07.529 *******
2026-02-02 03:59:35.655480 | orchestrator |
2026-02-02 03:59:35.655562 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-02 03:59:35.655571 | orchestrator | Monday 02 February 2026 03:56:16 +0000 (0:00:00.149) 0:05:07.679 *******
2026-02-02 03:59:35.655576 | orchestrator |
2026-02-02 03:59:35.655580 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-02 03:59:35.655585 | orchestrator | Monday 02 February 2026 03:56:16 +0000 (0:00:00.138) 0:05:07.818 *******
2026-02-02 03:59:35.655589 | orchestrator |
2026-02-02 03:59:35.655594 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-02-02 03:59:35.655598 | orchestrator | Monday 02 February 2026 03:56:16 +0000 (0:00:00.334) 0:05:08.152 *******
2026-02-02 03:59:35.655603 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:59:35.655608 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:59:35.655612 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:59:35.655617 | orchestrator |
2026-02-02 03:59:35.655621 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-02-02 03:59:35.655625 | orchestrator | Monday 02 February 2026 03:56:28 +0000 (0:00:12.464) 0:05:20.617 *******
2026-02-02 03:59:35.655629 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:59:35.655634 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:59:35.655638 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:59:35.655642 | orchestrator |
2026-02-02 03:59:35.655648 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-02-02 03:59:35.655676 | orchestrator | Monday 02 February 2026 03:56:49 +0000 (0:00:20.124) 0:05:40.741 *******
2026-02-02 03:59:35.655684 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:59:35.655690 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:59:35.655697 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:59:35.655703 | orchestrator |
2026-02-02 03:59:35.655709 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-02-02 03:59:35.655715 | orchestrator | Monday 02 February 2026 03:57:13 +0000 (0:00:24.924) 0:06:05.666 *******
2026-02-02 03:59:35.655722 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:59:35.655727 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:59:35.655733 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:59:35.655740 | orchestrator |
2026-02-02 03:59:35.655747 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-02-02 03:59:35.655754 | orchestrator | Monday 02 February 2026 03:57:58 +0000 (0:00:44.579) 0:06:50.246 *******
2026-02-02 03:59:35.655760 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:59:35.655767 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:59:35.655773 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:59:35.655780 | orchestrator |
2026-02-02 03:59:35.655786 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-02-02 03:59:35.655793 | orchestrator | Monday 02 February 2026 03:57:59 +0000 (0:00:00.845) 0:06:51.091 *******
2026-02-02 03:59:35.655799 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:59:35.655806 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:59:35.655812 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:59:35.655819 | orchestrator |
2026-02-02 03:59:35.655826 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-02-02 03:59:35.655832 | orchestrator | Monday 02 February 2026 03:58:00 +0000 (0:00:00.780) 0:06:51.872 *******
2026-02-02 03:59:35.655836 | orchestrator | changed: [testbed-node-4]
2026-02-02 03:59:35.655840 | orchestrator | changed: [testbed-node-3]
2026-02-02 03:59:35.655845 | orchestrator | changed: [testbed-node-5]
2026-02-02 03:59:35.655849 | orchestrator |
2026-02-02 03:59:35.655854 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-02-02 03:59:35.655859 | orchestrator | Monday 02 February 2026 03:58:28 +0000 (0:00:28.215) 0:07:20.087 *******
2026-02-02 03:59:35.655863 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:59:35.655867 | orchestrator |
2026-02-02 03:59:35.655872 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-02-02 03:59:35.655876 | orchestrator | Monday 02 February 2026 03:58:28 +0000 (0:00:00.149) 0:07:20.237 *******
2026-02-02 03:59:35.655880 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:59:35.655884 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:59:35.655888 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:59:35.655892 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:59:35.655896 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:59:35.655901 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2026-02-02 03:59:35.655907 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-02 03:59:35.655912 | orchestrator |
2026-02-02 03:59:35.655918 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-02-02 03:59:35.655924 | orchestrator | Monday 02 February 2026 03:58:50 +0000 (0:00:22.131) 0:07:42.369 *******
2026-02-02 03:59:35.655932 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:59:35.655936 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:59:35.655940 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:59:35.655945 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:59:35.655949 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:59:35.655953 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:59:35.655957 | orchestrator |
2026-02-02 03:59:35.655961 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-02-02 03:59:35.655971 | orchestrator | Monday 02 February 2026 03:59:00 +0000 (0:00:09.509) 0:07:51.878 *******
2026-02-02 03:59:35.655976 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:59:35.655980 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:59:35.655984 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:59:35.655988 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:59:35.655992 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:59:35.655997 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5
2026-02-02 03:59:35.656001 | orchestrator |
2026-02-02 03:59:35.656016 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-02 03:59:35.656020 | orchestrator | Monday 02 February 2026 03:59:05 +0000 (0:00:04.859) 0:07:56.737 *******
2026-02-02 03:59:35.656025 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-02 03:59:35.656029 | orchestrator |
2026-02-02 03:59:35.656046 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-02 03:59:35.656050 | orchestrator | Monday 02 February 2026 03:59:17 +0000 (0:00:12.641) 0:08:09.379 *******
2026-02-02 03:59:35.656054 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-02 03:59:35.656058 | orchestrator |
2026-02-02 03:59:35.656062 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-02-02 03:59:35.656066 | orchestrator | Monday 02 February 2026 03:59:19 +0000 (0:00:01.513) 0:08:10.893 *******
2026-02-02 03:59:35.656071 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:59:35.656075 | orchestrator |
2026-02-02 03:59:35.656079 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-02-02 03:59:35.656083 | orchestrator | Monday 02 February 2026 03:59:20 +0000 (0:00:01.568) 0:08:12.461 *******
2026-02-02 03:59:35.656087 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-02 03:59:35.656091 | orchestrator |
2026-02-02 03:59:35.656095 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2026-02-02 03:59:35.656099 | orchestrator | Monday 02 February 2026 03:59:30 +0000 (0:00:09.702) 0:08:22.164 *******
2026-02-02 03:59:35.656145 | orchestrator | ok: [testbed-node-3]
2026-02-02 03:59:35.656150 | orchestrator | ok: [testbed-node-4]
2026-02-02 03:59:35.656155 | orchestrator | ok: [testbed-node-5]
2026-02-02 03:59:35.656159 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:59:35.656163 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:59:35.656167 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:59:35.656171 | orchestrator |
2026-02-02 03:59:35.656175 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-02-02 03:59:35.656179 | orchestrator |
2026-02-02 03:59:35.656183 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-02-02 03:59:35.656187 | orchestrator | Monday 02 February 2026 03:59:32 +0000 (0:00:01.853) 0:08:24.018 *******
2026-02-02 03:59:35.656191 | orchestrator | changed: [testbed-node-0]
2026-02-02 03:59:35.656196 | orchestrator | changed: [testbed-node-1]
2026-02-02 03:59:35.656200 | orchestrator | changed: [testbed-node-2]
2026-02-02 03:59:35.656204 | orchestrator |
2026-02-02 03:59:35.656208 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-02-02 03:59:35.656212 | orchestrator |
2026-02-02 03:59:35.656216 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-02-02 03:59:35.656220 | orchestrator | Monday 02 February 2026 03:59:33 +0000 (0:00:01.250) 0:08:25.268 *******
2026-02-02 03:59:35.656224 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:59:35.656228 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:59:35.656232 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:59:35.656237 | orchestrator |
2026-02-02 03:59:35.656241 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-02-02 03:59:35.656245 | orchestrator |
2026-02-02 03:59:35.656249 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-02-02 03:59:35.656253 | orchestrator | Monday 02 February 2026 03:59:34 +0000 (0:00:00.548) 0:08:25.817 *******
2026-02-02 03:59:35.656261 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-02-02 03:59:35.656266 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-02-02 03:59:35.656270 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-02-02 03:59:35.656275 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-02-02 03:59:35.656279 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-02-02 03:59:35.656283 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-02-02 03:59:35.656287 | orchestrator | skipping: [testbed-node-3]
2026-02-02 03:59:35.656291 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-02-02 03:59:35.656295 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-02-02 03:59:35.656299 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-02-02 03:59:35.656303 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-02-02 03:59:35.656308 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-02-02 03:59:35.656312 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-02-02 03:59:35.656316 | orchestrator | skipping: [testbed-node-4]
2026-02-02 03:59:35.656320 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-02-02 03:59:35.656324 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-02-02 03:59:35.656328 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-02-02 03:59:35.656332 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-02-02 03:59:35.656336 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-02-02 03:59:35.656341 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-02-02 03:59:35.656345 | orchestrator | skipping: [testbed-node-5]
2026-02-02 03:59:35.656349 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-02-02 03:59:35.656353 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-02-02 03:59:35.656357 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-02-02 03:59:35.656361 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-02-02 03:59:35.656365 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-02-02 03:59:35.656369 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-02-02 03:59:35.656373 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:59:35.656377 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-02-02 03:59:35.656381 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-02-02 03:59:35.656389 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-02-02 03:59:35.656393 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-02-02 03:59:35.656397 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-02-02 03:59:35.656401 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-02-02 03:59:35.656409 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:59:38.320140 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-02-02 03:59:38.320275 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-02-02 03:59:38.320295 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-02-02 03:59:38.320309 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-02-02 03:59:38.320320 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-02-02 03:59:38.320331 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-02-02 03:59:38.320342 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:59:38.320352 | orchestrator | 2026-02-02 03:59:38.320364 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-02-02 03:59:38.320375 | orchestrator | 2026-02-02 03:59:38.320386 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-02-02 03:59:38.320427 | orchestrator | Monday 02 February 2026 03:59:35 +0000 (0:00:01.493) 0:08:27.310 ******* 2026-02-02 03:59:38.320441 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-02-02 03:59:38.320452 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-02-02 03:59:38.320464 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:59:38.320475 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-02-02 03:59:38.320487 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-02-02 03:59:38.320498 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:59:38.320510 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-02-02 03:59:38.320521 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-02-02 03:59:38.320533 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:59:38.320543 | orchestrator | 2026-02-02 03:59:38.320550 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-02-02 03:59:38.320557 | orchestrator | 2026-02-02 03:59:38.320564 | orchestrator | TASK [nova : Run Nova API online 
database migrations] ************************** 2026-02-02 03:59:38.320571 | orchestrator | Monday 02 February 2026 03:59:36 +0000 (0:00:00.805) 0:08:28.116 ******* 2026-02-02 03:59:38.320578 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:59:38.320584 | orchestrator | 2026-02-02 03:59:38.320591 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-02-02 03:59:38.320598 | orchestrator | 2026-02-02 03:59:38.320604 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-02-02 03:59:38.320611 | orchestrator | Monday 02 February 2026 03:59:37 +0000 (0:00:00.706) 0:08:28.823 ******* 2026-02-02 03:59:38.320618 | orchestrator | skipping: [testbed-node-0] 2026-02-02 03:59:38.320624 | orchestrator | skipping: [testbed-node-1] 2026-02-02 03:59:38.320631 | orchestrator | skipping: [testbed-node-2] 2026-02-02 03:59:38.320638 | orchestrator | 2026-02-02 03:59:38.320644 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 03:59:38.320651 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 03:59:38.320662 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2026-02-02 03:59:38.320674 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-02-02 03:59:38.320685 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-02-02 03:59:38.320697 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-02 03:59:38.320708 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-02 03:59:38.320719 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  
rescued=0 ignored=0 2026-02-02 03:59:38.320729 | orchestrator | 2026-02-02 03:59:38.320739 | orchestrator | 2026-02-02 03:59:38.320748 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 03:59:38.320759 | orchestrator | Monday 02 February 2026 03:59:37 +0000 (0:00:00.685) 0:08:29.509 ******* 2026-02-02 03:59:38.320771 | orchestrator | =============================================================================== 2026-02-02 03:59:38.320782 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 44.58s 2026-02-02 03:59:38.320793 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 29.15s 2026-02-02 03:59:38.320804 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 28.22s 2026-02-02 03:59:38.320827 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 24.92s 2026-02-02 03:59:38.320839 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.13s 2026-02-02 03:59:38.320851 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.93s 2026-02-02 03:59:38.320862 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 20.12s 2026-02-02 03:59:38.320889 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 18.36s 2026-02-02 03:59:38.320901 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 15.81s 2026-02-02 03:59:38.320914 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.52s 2026-02-02 03:59:38.320950 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.64s 2026-02-02 03:59:38.320963 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.46s 2026-02-02 03:59:38.320974 | 
orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.59s 2026-02-02 03:59:38.320985 | orchestrator | nova-cell : Create cell ------------------------------------------------ 10.93s 2026-02-02 03:59:38.320997 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.42s 2026-02-02 03:59:38.321009 | orchestrator | nova-cell : Discover nova hosts ----------------------------------------- 9.70s 2026-02-02 03:59:38.321021 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.51s 2026-02-02 03:59:38.321032 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.62s 2026-02-02 03:59:38.321042 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 7.24s 2026-02-02 03:59:38.321049 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 6.98s 2026-02-02 03:59:40.979751 | orchestrator | 2026-02-02 03:59:40 | INFO  | Task 8f24f678-88db-46ee-800e-e183414ea304 (horizon) was prepared for execution. 2026-02-02 03:59:40.979890 | orchestrator | 2026-02-02 03:59:40 | INFO  | It takes a moment until task 8f24f678-88db-46ee-800e-e183414ea304 (horizon) has been started and output is visible here. 
2026-02-02 03:59:48.730870 | orchestrator |
2026-02-02 03:59:48.730958 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-02 03:59:48.730971 | orchestrator |
2026-02-02 03:59:48.730977 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-02 03:59:48.730985 | orchestrator | Monday 02 February 2026 03:59:45 +0000 (0:00:00.289) 0:00:00.289 *******
2026-02-02 03:59:48.730991 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:59:48.730998 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:59:48.731004 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:59:48.731009 | orchestrator |
2026-02-02 03:59:48.731015 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-02 03:59:48.731022 | orchestrator | Monday 02 February 2026 03:59:45 +0000 (0:00:00.347) 0:00:00.637 *******
2026-02-02 03:59:48.731028 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-02-02 03:59:48.731034 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-02-02 03:59:48.731041 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-02-02 03:59:48.731047 | orchestrator |
2026-02-02 03:59:48.731054 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-02-02 03:59:48.731059 | orchestrator |
2026-02-02 03:59:48.731065 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-02 03:59:48.731072 | orchestrator | Monday 02 February 2026 03:59:46 +0000 (0:00:00.479) 0:00:01.117 *******
2026-02-02 03:59:48.731079 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 03:59:48.731086 | orchestrator |
2026-02-02 03:59:48.731093 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-02-02 03:59:48.731099 | orchestrator | Monday 02 February 2026 03:59:46 +0000 (0:00:00.581) 0:00:01.698 *******
2026-02-02 03:59:48.731156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-02 03:59:48.731183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-02 03:59:48.731200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-02 03:59:48.731207 | orchestrator |
2026-02-02 03:59:48.731214 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-02-02 03:59:48.731220 | orchestrator | Monday 02 February 2026 03:59:48 +0000 (0:00:01.145) 0:00:02.844 *******
2026-02-02 03:59:48.731226 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:59:48.731232 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:59:48.731236 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:59:48.731239 | orchestrator |
2026-02-02 03:59:48.731243 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-02 03:59:48.731247 | orchestrator | Monday 02 February 2026 03:59:48 +0000 (0:00:00.498) 0:00:03.342 *******
2026-02-02 03:59:48.731254 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-02 03:59:55.058233 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-02 03:59:55.058337 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-02-02 03:59:55.058351 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-02-02 03:59:55.058361 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-02-02 03:59:55.058370 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-02-02 03:59:55.058379 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-02-02 03:59:55.058388 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-02-02 03:59:55.058419 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-02 03:59:55.058428 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-02 03:59:55.058438 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-02-02 03:59:55.058446 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-02-02 03:59:55.058455 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-02-02 03:59:55.058464 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-02-02 03:59:55.058473 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-02-02 03:59:55.058481 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-02-02 03:59:55.058490 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-02 03:59:55.058498 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-02 03:59:55.058507 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-02-02 03:59:55.058516 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-02-02 03:59:55.058525 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-02-02 03:59:55.058533 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-02-02 03:59:55.058542 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-02-02 03:59:55.058554 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-02-02 03:59:55.058570 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-02-02 03:59:55.058588 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-02-02 03:59:55.058603 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-02-02 03:59:55.058617 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-02-02 03:59:55.058648 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-02-02 03:59:55.058666 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-02-02 03:59:55.058682 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-02-02 03:59:55.058697 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-02-02 03:59:55.058713 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-02-02 03:59:55.058730 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-02-02 03:59:55.058745 | orchestrator |
2026-02-02 03:59:55.058757 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-02 03:59:55.058769 | orchestrator | Monday 02 February 2026 03:59:49 +0000 (0:00:00.850) 0:00:04.193 *******
2026-02-02 03:59:55.058780 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:59:55.058801 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:59:55.058811 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:59:55.058822 | orchestrator |
2026-02-02 03:59:55.058832 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-02 03:59:55.058843 | orchestrator | Monday 02 February 2026 03:59:49 +0000 (0:00:00.330) 0:00:04.523 *******
2026-02-02 03:59:55.058853 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:59:55.058865 | orchestrator |
2026-02-02 03:59:55.058892 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-02 03:59:55.058902 | orchestrator | Monday 02 February 2026 03:59:50 +0000 (0:00:00.342) 0:00:04.866 *******
2026-02-02 03:59:55.058911 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:59:55.058919 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:59:55.058928 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:59:55.058937 | orchestrator |
2026-02-02 03:59:55.058946 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-02 03:59:55.058954 | orchestrator | Monday 02 February 2026 03:59:50 +0000 (0:00:00.319) 0:00:05.185 *******
2026-02-02 03:59:55.058963 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:59:55.058972 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:59:55.058980 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:59:55.058989 | orchestrator |
2026-02-02 03:59:55.058997 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-02 03:59:55.059006 | orchestrator | Monday 02 February 2026 03:59:50 +0000 (0:00:00.325) 0:00:05.511 *******
2026-02-02 03:59:55.059015 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:59:55.059024 | orchestrator |
2026-02-02 03:59:55.059032 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-02 03:59:55.059041 | orchestrator | Monday 02 February 2026 03:59:50 +0000 (0:00:00.142) 0:00:05.654 *******
2026-02-02 03:59:55.059050 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:59:55.059059 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:59:55.059068 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:59:55.059076 | orchestrator |
2026-02-02 03:59:55.059085 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-02 03:59:55.059094 | orchestrator | Monday 02 February 2026 03:59:51 +0000 (0:00:00.320) 0:00:05.974 *******
2026-02-02 03:59:55.059102 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:59:55.059132 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:59:55.059142 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:59:55.059151 | orchestrator |
2026-02-02 03:59:55.059160 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-02 03:59:55.059169 | orchestrator | Monday 02 February 2026 03:59:51 +0000 (0:00:00.560) 0:00:06.534 *******
2026-02-02 03:59:55.059177 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:59:55.059186 | orchestrator |
2026-02-02 03:59:55.059195 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-02 03:59:55.059203 | orchestrator | Monday 02 February 2026 03:59:51 +0000 (0:00:00.133) 0:00:06.668 *******
2026-02-02 03:59:55.059212 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:59:55.059221 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:59:55.059229 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:59:55.059238 | orchestrator |
2026-02-02 03:59:55.059247 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-02 03:59:55.059255 | orchestrator | Monday 02 February 2026 03:59:52 +0000 (0:00:00.287) 0:00:06.956 *******
2026-02-02 03:59:55.059264 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:59:55.059273 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:59:55.059282 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:59:55.059290 | orchestrator |
2026-02-02 03:59:55.059299 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-02 03:59:55.059308 | orchestrator | Monday 02 February 2026 03:59:52 +0000 (0:00:00.320) 0:00:07.277 *******
2026-02-02 03:59:55.059316 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:59:55.059325 | orchestrator |
2026-02-02 03:59:55.059340 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-02 03:59:55.059348 | orchestrator | Monday 02 February 2026 03:59:52 +0000 (0:00:00.143) 0:00:07.420 *******
2026-02-02 03:59:55.059357 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:59:55.059366 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:59:55.059374 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:59:55.059383 | orchestrator |
2026-02-02 03:59:55.059392 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-02 03:59:55.059401 | orchestrator | Monday 02 February 2026 03:59:53 +0000 (0:00:00.499) 0:00:07.920 *******
2026-02-02 03:59:55.059409 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:59:55.059418 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:59:55.059432 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:59:55.059441 | orchestrator |
2026-02-02 03:59:55.059450 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-02 03:59:55.059458 | orchestrator | Monday 02 February 2026 03:59:53 +0000 (0:00:00.321) 0:00:08.242 *******
2026-02-02 03:59:55.059467 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:59:55.059476 | orchestrator |
2026-02-02 03:59:55.059485 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-02 03:59:55.059493 | orchestrator | Monday 02 February 2026 03:59:53 +0000 (0:00:00.161) 0:00:08.404 *******
2026-02-02 03:59:55.059502 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:59:55.059511 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:59:55.059520 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:59:55.059528 | orchestrator |
2026-02-02 03:59:55.059539 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-02 03:59:55.059554 | orchestrator | Monday 02 February 2026 03:59:54 +0000 (0:00:00.325) 0:00:08.729 *******
2026-02-02 03:59:55.059569 | orchestrator | ok: [testbed-node-0]
2026-02-02 03:59:55.059584 | orchestrator | ok: [testbed-node-1]
2026-02-02 03:59:55.059599 | orchestrator | ok: [testbed-node-2]
2026-02-02 03:59:55.059612 | orchestrator |
2026-02-02 03:59:55.059627 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-02 03:59:55.059642 | orchestrator | Monday 02 February 2026 03:59:54 +0000 (0:00:00.334) 0:00:09.063 *******
2026-02-02 03:59:55.059657 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:59:55.059671 | orchestrator |
2026-02-02 03:59:55.059686 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-02 03:59:55.059701 | orchestrator | Monday 02 February 2026 03:59:54 +0000 (0:00:00.148) 0:00:09.211 *******
2026-02-02 03:59:55.059717 | orchestrator | skipping: [testbed-node-0]
2026-02-02 03:59:55.059732 | orchestrator | skipping: [testbed-node-1]
2026-02-02 03:59:55.059747 | orchestrator | skipping: [testbed-node-2]
2026-02-02 03:59:55.059762 | orchestrator |
2026-02-02 03:59:55.059777 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-02 03:59:55.059794 | orchestrator | Monday 02 February 2026 03:59:55 +0000 (0:00:00.555) 0:00:09.768 *******
2026-02-02 04:00:08.483279 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:00:08.483399 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:00:08.483416 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:00:08.483428 | orchestrator |
2026-02-02 04:00:08.483441 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-02 04:00:08.483454 | orchestrator | Monday 02 February 2026 03:59:55 +0000 (0:00:00.316) 0:00:10.084 *******
2026-02-02 04:00:08.483465 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:00:08.483477 | orchestrator |
2026-02-02 04:00:08.483488 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-02 04:00:08.483500 | orchestrator | Monday 02 February 2026 03:59:55 +0000 (0:00:00.119) 0:00:10.204 *******
2026-02-02 04:00:08.483511 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:00:08.483524 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:00:08.483543 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:00:08.483562 | orchestrator |
2026-02-02 04:00:08.483580 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-02 04:00:08.483630 | orchestrator | Monday 02 February 2026 03:59:55 +0000 (0:00:00.304) 0:00:10.508 *******
2026-02-02 04:00:08.483651 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:00:08.483670 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:00:08.483689 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:00:08.483709 | orchestrator |
2026-02-02 04:00:08.483729 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-02 04:00:08.483749 | orchestrator | Monday 02 February 2026 03:59:56 +0000 (0:00:00.554) 0:00:11.062 *******
2026-02-02 04:00:08.483768 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:00:08.483786 | orchestrator |
2026-02-02 04:00:08.483804 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-02 04:00:08.483824 | orchestrator | Monday 02 February 2026 03:59:56 +0000 (0:00:00.142) 0:00:11.204 *******
2026-02-02 04:00:08.483845 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:00:08.483865 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:00:08.483881 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:00:08.483894 | orchestrator |
2026-02-02 04:00:08.483908 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-02 04:00:08.483921 | orchestrator | Monday 02 February 2026 03:59:56 +0000 (0:00:00.303) 0:00:11.508 *******
2026-02-02 04:00:08.483937 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:00:08.483956 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:00:08.483976 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:00:08.483995 | orchestrator |
2026-02-02 04:00:08.484015 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-02 04:00:08.484034 | orchestrator | Monday 02 February 2026 03:59:57 +0000 (0:00:00.352) 0:00:11.860 *******
2026-02-02 04:00:08.484071 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:00:08.484095 | orchestrator |
2026-02-02 04:00:08.484110 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-02 04:00:08.484147 | orchestrator | Monday 02 February 2026 03:59:57 +0000 (0:00:00.132) 0:00:11.993 *******
2026-02-02 04:00:08.484158 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:00:08.484169 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:00:08.484180 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:00:08.484191 | orchestrator |
2026-02-02 04:00:08.484202 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-02 04:00:08.484213 | orchestrator | Monday 02 February 2026 03:59:57 +0000 (0:00:00.552) 0:00:12.546 *******
2026-02-02 04:00:08.484224 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:00:08.484235 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:00:08.484246 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:00:08.484258 | orchestrator |
2026-02-02 04:00:08.484281 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-02 04:00:08.484310 | orchestrator | Monday 02 February 2026 03:59:58 +0000 (0:00:00.332) 0:00:12.878 *******
2026-02-02 04:00:08.484327 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:00:08.484343 | orchestrator |
2026-02-02 04:00:08.484359 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-02 04:00:08.484375 | orchestrator | Monday 02 February 2026 03:59:58 +0000 (0:00:00.136) 0:00:13.014 *******
2026-02-02 04:00:08.484413 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:00:08.484431 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:00:08.484448 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:00:08.484465 | orchestrator |
2026-02-02 04:00:08.484482 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-02-02 04:00:08.484499 | orchestrator | Monday 02 February 2026
03:59:58 +0000 (0:00:00.307) 0:00:13.322 ******* 2026-02-02 04:00:08.484516 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:00:08.484531 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:00:08.484548 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:00:08.484567 | orchestrator | 2026-02-02 04:00:08.484583 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-02-02 04:00:08.484618 | orchestrator | Monday 02 February 2026 04:00:00 +0000 (0:00:01.592) 0:00:14.914 ******* 2026-02-02 04:00:08.484635 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-02 04:00:08.484654 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-02 04:00:08.484671 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-02 04:00:08.484689 | orchestrator | 2026-02-02 04:00:08.484706 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-02-02 04:00:08.484725 | orchestrator | Monday 02 February 2026 04:00:02 +0000 (0:00:01.859) 0:00:16.773 ******* 2026-02-02 04:00:08.484743 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-02 04:00:08.484762 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-02 04:00:08.484781 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-02 04:00:08.484801 | orchestrator | 2026-02-02 04:00:08.484820 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-02-02 04:00:08.484866 | orchestrator | Monday 02 February 2026 04:00:03 +0000 (0:00:01.850) 0:00:18.624 ******* 2026-02-02 04:00:08.484879 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-02 04:00:08.484890 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-02 04:00:08.484901 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-02 04:00:08.484912 | orchestrator | 2026-02-02 04:00:08.484923 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-02-02 04:00:08.484934 | orchestrator | Monday 02 February 2026 04:00:05 +0000 (0:00:01.535) 0:00:20.160 ******* 2026-02-02 04:00:08.484945 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:00:08.484956 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:00:08.484967 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:00:08.484979 | orchestrator | 2026-02-02 04:00:08.484990 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-02-02 04:00:08.485001 | orchestrator | Monday 02 February 2026 04:00:05 +0000 (0:00:00.308) 0:00:20.468 ******* 2026-02-02 04:00:08.485012 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:00:08.485023 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:00:08.485035 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:00:08.485053 | orchestrator | 2026-02-02 04:00:08.485070 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-02 04:00:08.485089 | orchestrator | Monday 02 February 2026 04:00:06 +0000 (0:00:00.577) 0:00:21.046 ******* 2026-02-02 04:00:08.485108 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 04:00:08.485156 | orchestrator | 2026-02-02 04:00:08.485175 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-02-02 04:00:08.485194 | orchestrator | 
Monday 02 February 2026 04:00:06 +0000 (0:00:00.664) 0:00:21.710 ******* 2026-02-02 04:00:08.485233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-02 04:00:08.485282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-02 04:00:09.346929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-02 04:00:09.347070 | orchestrator | 2026-02-02 04:00:09.347093 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-02-02 04:00:09.347110 | orchestrator | Monday 02 February 2026 04:00:08 +0000 (0:00:01.474) 0:00:23.185 ******* 2026-02-02 04:00:09.347231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-02 04:00:09.347263 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:00:09.347290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-02 04:00:09.347305 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:00:09.347405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-02 04:00:11.759907 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:00:11.760024 | orchestrator | 2026-02-02 04:00:11.760041 | orchestrator | TASK [service-cert-copy : horizon | 
Copying over backend internal TLS key] ***** 2026-02-02 04:00:11.760054 | orchestrator | Monday 02 February 2026 04:00:09 +0000 (0:00:00.868) 0:00:24.053 ******* 2026-02-02 04:00:11.760090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-02 04:00:11.760176 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:00:11.760342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-02 04:00:11.760403 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:00:11.760450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-02 04:00:11.760472 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:00:11.760491 | orchestrator | 2026-02-02 04:00:11.760511 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-02-02 04:00:11.760574 | orchestrator | Monday 02 February 2026 04:00:10 +0000 (0:00:01.020) 0:00:25.074 ******* 2026-02-02 04:00:11.760626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-02 04:00:53.828312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-02 04:00:53.828461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-02 04:00:53.828477 | orchestrator | 
2026-02-02 04:00:53.828491 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-02 04:00:53.828506 | orchestrator | Monday 02 February 2026 04:00:11 +0000 (0:00:01.398) 0:00:26.472 *******
2026-02-02 04:00:53.828523 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:00:53.828536 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:00:53.828547 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:00:53.828558 | orchestrator |
2026-02-02 04:00:53.828569 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-02 04:00:53.828579 | orchestrator | Monday 02 February 2026 04:00:12 +0000 (0:00:00.587) 0:00:27.060 *******
2026-02-02 04:00:53.828591 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 04:00:53.828602 | orchestrator |
2026-02-02 04:00:53.828613 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-02-02 04:00:53.828624 | orchestrator | Monday 02 February 2026 04:00:12 +0000 (0:00:00.614) 0:00:27.675 *******
2026-02-02 04:00:53.828636 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:00:53.828647 | orchestrator |
2026-02-02 04:00:53.828658 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-02-02 04:00:53.828669 | orchestrator | Monday 02 February 2026 04:00:15 +0000 (0:00:02.146) 0:00:29.821 *******
2026-02-02 04:00:53.828680 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:00:53.828691 | orchestrator |
2026-02-02 04:00:53.828701 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-02-02 04:00:53.828713 | orchestrator | Monday 02 February 2026 04:00:17 +0000 (0:00:02.065) 0:00:31.887 *******
2026-02-02 04:00:53.828744 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:00:53.828767 | orchestrator |
2026-02-02 04:00:53.828791 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-02-02 04:00:53.828803 | orchestrator | Monday 02 February 2026 04:00:32 +0000 (0:00:15.343) 0:00:47.230 *******
2026-02-02 04:00:53.828815 | orchestrator |
2026-02-02 04:00:53.828826 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-02-02 04:00:53.828838 | orchestrator | Monday 02 February 2026 04:00:32 +0000 (0:00:00.283) 0:00:47.513 *******
2026-02-02 04:00:53.828850 | orchestrator |
2026-02-02 04:00:53.828862 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-02-02 04:00:53.828874 | orchestrator | Monday 02 February 2026 04:00:32 +0000 (0:00:00.082) 0:00:47.596 *******
2026-02-02 04:00:53.828886 | orchestrator |
2026-02-02 04:00:53.828900 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-02-02 04:00:53.828912 | orchestrator | Monday 02 February 2026 04:00:32 +0000 (0:00:00.078) 0:00:47.674 *******
2026-02-02 04:00:53.828924 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:00:53.828936 | orchestrator | changed: [testbed-node-2]
2026-02-02 04:00:53.828947 | orchestrator | changed: [testbed-node-1]
2026-02-02 04:00:53.828959 | orchestrator |
2026-02-02 04:00:53.828971 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 04:00:53.828983 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-02 04:00:53.828997 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-02-02 04:00:53.829009 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-02-02 04:00:53.829020 | orchestrator |
2026-02-02 04:00:53.829031 | orchestrator |
2026-02-02 04:00:53.829043 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 04:00:53.829054 | orchestrator | Monday 02 February 2026 04:00:53 +0000 (0:00:20.847) 0:01:08.522 *******
2026-02-02 04:00:53.829066 | orchestrator | ===============================================================================
2026-02-02 04:00:53.829077 | orchestrator | horizon : Restart horizon container ------------------------------------ 20.85s
2026-02-02 04:00:53.829088 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.34s
2026-02-02 04:00:53.829100 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.15s
2026-02-02 04:00:53.829111 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.07s
2026-02-02 04:00:53.829122 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.86s
2026-02-02 04:00:53.829156 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.85s
2026-02-02 04:00:53.829164 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.59s
2026-02-02 04:00:53.829170 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.54s
2026-02-02 04:00:53.829177 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.47s
2026-02-02 04:00:53.829184 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.40s
2026-02-02 04:00:53.829191 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.15s
2026-02-02 04:00:53.829197 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.02s
2026-02-02 04:00:53.829204 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.87s
2026-02-02 04:00:53.829219 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.85s
2026-02-02 04:00:54.285161 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.66s
2026-02-02 04:00:54.285291 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.61s
2026-02-02 04:00:54.285319 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.59s
2026-02-02 04:00:54.285376 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.58s
2026-02-02 04:00:54.285388 | orchestrator | horizon : Copying over custom themes ------------------------------------ 0.58s
2026-02-02 04:00:54.285399 | orchestrator | horizon : Update policy file name --------------------------------------- 0.56s
2026-02-02 04:00:56.858279 | orchestrator | 2026-02-02 04:00:56 | INFO  | Task d150eac2-68a8-4b62-a0db-894b3ac10d47 (skyline) was prepared for execution.
2026-02-02 04:00:56.858369 | orchestrator | 2026-02-02 04:00:56 | INFO  | It takes a moment until task d150eac2-68a8-4b62-a0db-894b3ac10d47 (skyline) has been started and output is visible here.
2026-02-02 04:01:26.764524 | orchestrator |
2026-02-02 04:01:26.764666 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-02 04:01:26.764689 | orchestrator |
2026-02-02 04:01:26.764703 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-02 04:01:26.764720 | orchestrator | Monday 02 February 2026 04:01:01 +0000 (0:00:00.308) 0:00:00.308 *******
2026-02-02 04:01:26.764735 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:01:26.764751 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:01:26.764767 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:01:26.764781 | orchestrator |
2026-02-02 04:01:26.764797 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-02 04:01:26.764811 | orchestrator | Monday 02 February 2026 04:01:01 +0000 (0:00:00.317) 0:00:00.625 *******
2026-02-02 04:01:26.764827 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True)
2026-02-02 04:01:26.764837 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True)
2026-02-02 04:01:26.764846 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True)
2026-02-02 04:01:26.764855 | orchestrator |
2026-02-02 04:01:26.764864 | orchestrator | PLAY [Apply role skyline] ******************************************************
2026-02-02 04:01:26.764873 | orchestrator |
2026-02-02 04:01:26.764881 | orchestrator | TASK [skyline : include_tasks] *************************************************
2026-02-02 04:01:26.764890 | orchestrator | Monday 02 February 2026 04:01:02 +0000 (0:00:00.470) 0:00:01.096 *******
2026-02-02 04:01:26.764900 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 04:01:26.764909 | orchestrator |
2026-02-02 04:01:26.764918 | orchestrator | TASK [service-ks-register : skyline | Creating services] ***********************
2026-02-02 04:01:26.764927 | orchestrator | Monday 02 February 2026 04:01:02 +0000 (0:00:00.583) 0:00:01.679 *******
2026-02-02 04:01:26.764936 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel))
2026-02-02 04:01:26.764944 | orchestrator |
2026-02-02 04:01:26.764953 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] **********************
2026-02-02 04:01:26.764962 | orchestrator | Monday 02 February 2026 04:01:06 +0000 (0:00:03.267) 0:00:04.947 *******
2026-02-02 04:01:26.764971 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal)
2026-02-02 04:01:26.764980 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public)
2026-02-02 04:01:26.764989 | orchestrator |
2026-02-02 04:01:26.764998 | orchestrator | TASK [service-ks-register : skyline | Creating projects] ***********************
2026-02-02 04:01:26.765006 | orchestrator | Monday 02 February 2026 04:01:12 +0000 (0:00:06.002) 0:00:10.949 *******
2026-02-02 04:01:26.765015 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-02 04:01:26.765025 | orchestrator |
2026-02-02 04:01:26.765033 | orchestrator | TASK [service-ks-register : skyline | Creating users] **************************
2026-02-02 04:01:26.765043 | orchestrator | Monday 02 February 2026 04:01:15 +0000 (0:00:03.094) 0:00:14.044 *******
2026-02-02 04:01:26.765052 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-02 04:01:26.765061 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service)
2026-02-02 04:01:26.765070 | orchestrator |
2026-02-02 04:01:26.765081 | orchestrator | TASK [service-ks-register : skyline | Creating roles] **************************
2026-02-02 04:01:26.765116 | orchestrator | Monday 02 February 2026 04:01:19 +0000 (0:00:03.918) 0:00:17.963 *******
2026-02-02 04:01:26.765126 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-02 04:01:26.765163 | orchestrator | 2026-02-02 04:01:26.765175 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] ********************* 2026-02-02 04:01:26.765186 | orchestrator | Monday 02 February 2026 04:01:22 +0000 (0:00:03.062) 0:00:21.025 ******* 2026-02-02 04:01:26.765197 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin) 2026-02-02 04:01:26.765207 | orchestrator | 2026-02-02 04:01:26.765232 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-02-02 04:01:26.765242 | orchestrator | Monday 02 February 2026 04:01:25 +0000 (0:00:03.281) 0:00:24.307 ******* 2026-02-02 04:01:26.765257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-02 04:01:26.765290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-02 04:01:26.765303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-02 04:01:26.765315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-02 04:01:26.765339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-02 04:01:26.765358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-02 04:01:30.746402 | orchestrator | 2026-02-02 04:01:30.746505 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-02-02 04:01:30.746521 | orchestrator | Monday 02 February 2026 04:01:26 +0000 (0:00:01.265) 0:00:25.573 ******* 2026-02-02 04:01:30.746531 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 04:01:30.746538 | orchestrator | 2026-02-02 04:01:30.746544 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-02-02 04:01:30.746551 | orchestrator | Monday 02 February 2026 04:01:27 +0000 (0:00:00.769) 0:00:26.342 ******* 2026-02-02 04:01:30.746560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-02 04:01:30.746589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-02 04:01:30.746608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-02 04:01:30.746630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-02 04:01:30.746638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-02 04:01:30.746646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-02 04:01:30.746658 | orchestrator | 2026-02-02 04:01:30.746664 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-02-02 04:01:30.746671 | orchestrator | Monday 02 February 2026 04:01:30 +0000 (0:00:02.514) 0:00:28.856 ******* 2026-02-02 04:01:30.746681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-02 04:01:30.746688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-02 04:01:30.746695 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:01:30.746707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-02 04:01:32.095700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-02 04:01:32.095794 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:01:32.095814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-02 04:01:32.095819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-02 04:01:32.095824 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:01:32.095829 | orchestrator | 2026-02-02 04:01:32.095834 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-02-02 04:01:32.095840 | orchestrator | Monday 02 February 2026 04:01:30 +0000 (0:00:00.697) 0:00:29.554 ******* 2026-02-02 04:01:32.095845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-02 04:01:32.095864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-02 04:01:32.095869 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:01:32.095877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-02 04:01:32.095885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-02 04:01:32.095892 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:01:32.095903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-02 04:01:32.095925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-02 04:01:40.680967 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:01:40.681095 | orchestrator | 2026-02-02 04:01:40.681118 | orchestrator | TASK 
[skyline : Copying over skyline.yaml files for services] ****************** 2026-02-02 04:01:40.681134 | orchestrator | Monday 02 February 2026 04:01:32 +0000 (0:00:01.348) 0:00:30.902 ******* 2026-02-02 04:01:40.681252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-02 04:01:40.681274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-02 04:01:40.681290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-02 04:01:40.681334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-02 04:01:40.681375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-02 04:01:40.681400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-02 04:01:40.681416 | orchestrator | 2026-02-02 04:01:40.681432 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] ******************* 2026-02-02 04:01:40.681447 | orchestrator | Monday 02 February 2026 04:01:34 +0000 (0:00:02.432) 0:00:33.335 ******* 2026-02-02 04:01:40.681463 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-02-02 04:01:40.681477 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-02-02 04:01:40.681491 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-02-02 04:01:40.681503 | orchestrator | 2026-02-02 04:01:40.681518 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ******************** 2026-02-02 04:01:40.681533 | orchestrator | Monday 02 February 2026 04:01:36 +0000 (0:00:01.616) 0:00:34.951 ******* 2026-02-02 04:01:40.681548 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-02-02 04:01:40.681563 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-02-02 04:01:40.681590 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-02-02 04:01:40.681606 | orchestrator | 2026-02-02 04:01:40.681622 | orchestrator | TASK [skyline : Copying over config.json files for services] ******************* 2026-02-02 04:01:40.681637 | orchestrator | Monday 02 February 2026 04:01:38 +0000 (0:00:02.172) 0:00:37.124 ******* 2026-02-02 04:01:40.681652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-02 04:01:40.681679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-02 04:01:42.837486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-02 04:01:42.837577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-02 04:01:42.837603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-02 04:01:42.837611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-02 04:01:42.837619 | orchestrator | 2026-02-02 04:01:42.837628 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-02-02 04:01:42.837637 | orchestrator | Monday 02 February 2026 04:01:40 +0000 (0:00:02.366) 0:00:39.491 ******* 2026-02-02 04:01:42.837643 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:01:42.837651 | orchestrator | skipping: 
[testbed-node-1] 2026-02-02 04:01:42.837657 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:01:42.837663 | orchestrator | 2026-02-02 04:01:42.837684 | orchestrator | TASK [skyline : Check skyline container] *************************************** 2026-02-02 04:01:42.837691 | orchestrator | Monday 02 February 2026 04:01:41 +0000 (0:00:00.366) 0:00:39.857 ******* 2026-02-02 04:01:42.837703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-02 04:01:42.837710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-02 04:01:42.837722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-02 04:01:42.837729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-02 04:01:42.837745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-02 04:02:10.645607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-02 04:02:10.645766 | orchestrator | 2026-02-02 04:02:10.645786 | orchestrator | TASK [skyline : Creating Skyline database] ************************************* 2026-02-02 04:02:10.645800 | orchestrator | Monday 02 February 2026 04:01:42 +0000 (0:00:01.787) 0:00:41.645 ******* 2026-02-02 04:02:10.645812 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:02:10.645824 | orchestrator | 2026-02-02 04:02:10.645835 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ******** 2026-02-02 04:02:10.645846 | orchestrator | Monday 02 February 2026 04:01:44 +0000 (0:00:02.005) 0:00:43.650 ******* 2026-02-02 04:02:10.645857 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:02:10.645868 | orchestrator | 2026-02-02 04:02:10.645880 | orchestrator | TASK [skyline : Running Skyline bootstrap container] *************************** 2026-02-02 04:02:10.645891 | orchestrator | Monday 02 February 2026 04:01:46 +0000 (0:00:02.125) 0:00:45.775 ******* 2026-02-02 04:02:10.645902 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:02:10.645913 | orchestrator | 2026-02-02 04:02:10.645924 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-02-02 04:02:10.645956 | orchestrator | Monday 02 February 2026 04:01:54 +0000 (0:00:07.639) 0:00:53.415 ******* 2026-02-02 04:02:10.645967 | orchestrator | 2026-02-02 04:02:10.645978 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-02-02 04:02:10.645990 | orchestrator | Monday 02 February 2026 04:01:54 +0000 (0:00:00.070) 0:00:53.485 ******* 2026-02-02 04:02:10.646001 | orchestrator | 2026-02-02 04:02:10.646012 | orchestrator | TASK [skyline : Flush handlers] 
************************************************ 2026-02-02 04:02:10.646084 | orchestrator | Monday 02 February 2026 04:01:54 +0000 (0:00:00.081) 0:00:53.566 ******* 2026-02-02 04:02:10.646096 | orchestrator | 2026-02-02 04:02:10.646108 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] **************** 2026-02-02 04:02:10.646121 | orchestrator | Monday 02 February 2026 04:01:54 +0000 (0:00:00.082) 0:00:53.649 ******* 2026-02-02 04:02:10.646133 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:02:10.646145 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:02:10.646204 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:02:10.646219 | orchestrator | 2026-02-02 04:02:10.646232 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ****************** 2026-02-02 04:02:10.646246 | orchestrator | Monday 02 February 2026 04:02:01 +0000 (0:00:06.245) 0:00:59.894 ******* 2026-02-02 04:02:10.646258 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:02:10.646271 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:02:10.646284 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:02:10.646296 | orchestrator | 2026-02-02 04:02:10.646309 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 04:02:10.646322 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-02 04:02:10.646338 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-02 04:02:10.646350 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-02 04:02:10.646363 | orchestrator | 2026-02-02 04:02:10.646376 | orchestrator | 2026-02-02 04:02:10.646389 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 04:02:10.646403 | orchestrator | Monday 02 
February 2026 04:02:10 +0000 (0:00:09.066) 0:01:08.961 ******* 2026-02-02 04:02:10.646416 | orchestrator | =============================================================================== 2026-02-02 04:02:10.646439 | orchestrator | skyline : Restart skyline-console container ----------------------------- 9.07s 2026-02-02 04:02:10.646453 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 7.64s 2026-02-02 04:02:10.646467 | orchestrator | skyline : Restart skyline-apiserver container --------------------------- 6.25s 2026-02-02 04:02:10.646478 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 6.00s 2026-02-02 04:02:10.646503 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 3.92s 2026-02-02 04:02:10.646514 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 3.28s 2026-02-02 04:02:10.646525 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.27s 2026-02-02 04:02:10.646536 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 3.09s 2026-02-02 04:02:10.646565 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 3.06s 2026-02-02 04:02:10.646577 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.51s 2026-02-02 04:02:10.646588 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.43s 2026-02-02 04:02:10.646599 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.37s 2026-02-02 04:02:10.646610 | orchestrator | skyline : Copying over nginx.conf files for services -------------------- 2.17s 2026-02-02 04:02:10.646621 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.13s 2026-02-02 04:02:10.646632 | orchestrator | skyline : Creating Skyline 
database ------------------------------------- 2.01s 2026-02-02 04:02:10.646643 | orchestrator | skyline : Check skyline container --------------------------------------- 1.79s 2026-02-02 04:02:10.646653 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.62s 2026-02-02 04:02:10.646664 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.35s 2026-02-02 04:02:10.646675 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.27s 2026-02-02 04:02:10.646686 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.77s 2026-02-02 04:02:13.276962 | orchestrator | 2026-02-02 04:02:13 | INFO  | Task 55d7f1ab-2aab-4ece-bc89-62ae52d6a985 (glance) was prepared for execution. 2026-02-02 04:02:13.277058 | orchestrator | 2026-02-02 04:02:13 | INFO  | It takes a moment until task 55d7f1ab-2aab-4ece-bc89-62ae52d6a985 (glance) has been started and output is visible here. 
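The TASKS RECAP above (produced by Ansible's `profile_tasks`-style timing output) lists each task with its duration, e.g. `skyline : Restart skyline-console container ----- 9.07s`. A minimal sketch for pulling those `(task, seconds)` pairs out of a captured console log — the regex and helper are illustrative, not part of any tool shown here:

```python
import re

# Matches recap lines of the form "role : Task name ------- 9.07s".
# The lazy ".+?" stops before the dash ruler; the trailing group
# captures the duration in seconds.
RECAP_RE = re.compile(r"^(?P<task>.+?)\s-+\s(?P<secs>\d+\.\d+)s$")

def parse_recap(lines):
    """Return (task_name, seconds) pairs for every recap line found."""
    out = []
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            out.append((m.group("task"), float(m.group("secs"))))
    return out
```

Fed the recap block above, this would surface the slowest steps (the container restarts and the bootstrap container) without re-reading the whole log.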
2026-02-02 04:02:46.419211 | orchestrator |
2026-02-02 04:02:46.419344 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-02 04:02:46.419365 | orchestrator |
2026-02-02 04:02:46.419379 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-02 04:02:46.419393 | orchestrator | Monday 02 February 2026 04:02:17 +0000 (0:00:00.282) 0:00:00.282 *******
2026-02-02 04:02:46.419407 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:02:46.419421 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:02:46.419433 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:02:46.419446 | orchestrator |
2026-02-02 04:02:46.419459 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-02 04:02:46.419475 | orchestrator | Monday 02 February 2026 04:02:18 +0000 (0:00:00.330) 0:00:00.612 *******
2026-02-02 04:02:46.419489 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-02-02 04:02:46.419504 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-02-02 04:02:46.419518 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-02-02 04:02:46.419533 | orchestrator |
2026-02-02 04:02:46.419547 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-02-02 04:02:46.419561 | orchestrator |
2026-02-02 04:02:46.419576 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-02-02 04:02:46.419590 | orchestrator | Monday 02 February 2026 04:02:18 +0000 (0:00:00.452) 0:00:01.065 *******
2026-02-02 04:02:46.419634 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 04:02:46.419650 | orchestrator |
2026-02-02 04:02:46.419665 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-02-02 04:02:46.419680 | orchestrator | Monday 02 February 2026 04:02:19 +0000 (0:00:00.586) 0:00:01.652 *******
2026-02-02 04:02:46.419695 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-02-02 04:02:46.419710 | orchestrator |
2026-02-02 04:02:46.419724 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-02-02 04:02:46.419739 | orchestrator | Monday 02 February 2026 04:02:22 +0000 (0:00:03.220) 0:00:04.873 *******
2026-02-02 04:02:46.419753 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-02-02 04:02:46.419769 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-02-02 04:02:46.419784 | orchestrator |
2026-02-02 04:02:46.419798 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-02-02 04:02:46.419811 | orchestrator | Monday 02 February 2026 04:02:28 +0000 (0:00:06.081) 0:00:10.955 *******
2026-02-02 04:02:46.419825 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-02 04:02:46.419840 | orchestrator |
2026-02-02 04:02:46.419853 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-02-02 04:02:46.419866 | orchestrator | Monday 02 February 2026 04:02:31 +0000 (0:00:03.079) 0:00:14.034 *******
2026-02-02 04:02:46.419879 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-02 04:02:46.419893 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-02-02 04:02:46.419907 | orchestrator |
2026-02-02 04:02:46.419919 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-02-02 04:02:46.419932 | orchestrator | Monday 02 February 2026 04:02:35 +0000 (0:00:03.748) 0:00:17.783 *******
2026-02-02 04:02:46.419945 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-02
04:02:46.419959 | orchestrator | 2026-02-02 04:02:46.419973 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-02-02 04:02:46.419985 | orchestrator | Monday 02 February 2026 04:02:38 +0000 (0:00:03.086) 0:00:20.869 ******* 2026-02-02 04:02:46.420015 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-02-02 04:02:46.420030 | orchestrator | 2026-02-02 04:02:46.420043 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-02-02 04:02:46.420057 | orchestrator | Monday 02 February 2026 04:02:41 +0000 (0:00:03.540) 0:00:24.410 ******* 2026-02-02 04:02:46.420104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-02 04:02:46.420136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-02 04:02:46.420159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-02 04:02:46.420203 | orchestrator | 2026-02-02 04:02:46.420217 | orchestrator | TASK [glance : include_tasks] 
**************************************************
2026-02-02 04:02:46.420230 | orchestrator | Monday 02 February 2026 04:02:45 +0000 (0:00:03.661) 0:00:28.072 *******
2026-02-02 04:02:46.420244 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 04:02:46.420266 | orchestrator |
2026-02-02 04:02:46.420291 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-02-02 04:03:02.785888 | orchestrator | Monday 02 February 2026 04:02:46 +0000 (0:00:00.795) 0:00:28.867 *******
2026-02-02 04:03:02.786115 | orchestrator | changed: [testbed-node-1]
2026-02-02 04:03:02.786144 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:03:02.786156 | orchestrator | changed: [testbed-node-2]
2026-02-02 04:03:02.786199 | orchestrator |
2026-02-02 04:03:02.786213 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-02-02 04:03:02.786225 | orchestrator | Monday 02 February 2026 04:02:50 +0000 (0:00:04.161) 0:00:33.029 *******
2026-02-02 04:03:02.786237 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-02 04:03:02.786250 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-02 04:03:02.786261 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-02 04:03:02.786272 | orchestrator |
2026-02-02 04:03:02.786283 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-02-02 04:03:02.786294 | orchestrator | Monday 02 February 2026 04:02:52 +0000 (0:00:01.510) 0:00:34.539 *******
2026-02-02 04:03:02.786305 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-02 04:03:02.786316 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-02 04:03:02.786327 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-02 04:03:02.786338 | orchestrator |
2026-02-02 04:03:02.786349 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-02-02 04:03:02.786360 | orchestrator | Monday 02 February 2026 04:02:53 +0000 (0:00:01.345) 0:00:35.884 *******
2026-02-02 04:03:02.786372 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:03:02.786385 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:03:02.786404 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:03:02.786423 | orchestrator |
2026-02-02 04:03:02.786441 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-02-02 04:03:02.786460 | orchestrator | Monday 02 February 2026 04:02:54 +0000 (0:00:00.145) 0:00:36.538 *******
2026-02-02 04:03:02.786477 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:03:02.786495 | orchestrator |
2026-02-02 04:03:02.786515 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-02-02 04:03:02.786534 | orchestrator | Monday 02 February 2026 04:02:54 +0000 (0:00:00.145) 0:00:36.683 *******
2026-02-02 04:03:02.786553 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:03:02.786571 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:03:02.786591 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:03:02.786609 | orchestrator |
2026-02-02 04:03:02.786629 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-02-02 04:03:02.786646 | orchestrator | Monday 02 February 2026 04:02:54 +0000 (0:00:00.338) 0:00:37.021 *******
2026-02-02 04:03:02.786661 | orchestrator | included:
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 04:03:02.786675 | orchestrator | 2026-02-02 04:03:02.786689 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-02-02 04:03:02.786702 | orchestrator | Monday 02 February 2026 04:02:55 +0000 (0:00:00.803) 0:00:37.824 ******* 2026-02-02 04:03:02.786740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-02 04:03:02.786807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-02 04:03:02.786829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-02 04:03:02.786850 | orchestrator | 2026-02-02 04:03:02.786862 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-02-02 04:03:02.786873 | orchestrator | Monday 02 February 2026 04:02:59 +0000 (0:00:04.242) 0:00:42.066 ******* 2026-02-02 04:03:02.786894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-02 04:03:07.081770 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:03:07.081892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-02 04:03:07.081941 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:03:07.081958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-02 04:03:07.081970 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:03:07.081982 | orchestrator | 2026-02-02 04:03:07.081995 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-02-02 04:03:07.082009 | orchestrator | Monday 02 February 2026 04:03:02 +0000 (0:00:03.166) 0:00:45.232 ******* 2026-02-02 04:03:07.082301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-02 04:03:07.082326 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:03:07.082345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-02 04:03:07.082355 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:03:07.082373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-02 04:03:46.015571 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:03:46.015714 | orchestrator | 2026-02-02 04:03:46.015735 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-02-02 04:03:46.015791 | orchestrator | Monday 02 February 2026 04:03:07 +0000 (0:00:04.300) 0:00:49.533 ******* 2026-02-02 04:03:46.015807 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:03:46.015847 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:03:46.015913 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:03:46.015923 | orchestrator | 2026-02-02 04:03:46.015931 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-02-02 04:03:46.015938 | orchestrator | Monday 02 February 2026 04:03:10 +0000 (0:00:03.888) 0:00:53.422 ******* 2026-02-02 04:03:46.015963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-02 04:03:46.015975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-02 04:03:46.016007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-02 04:03:46.016025 | orchestrator | 2026-02-02 04:03:46.016034 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-02-02 04:03:46.016046 | orchestrator | Monday 02 February 2026 04:03:15 +0000 (0:00:04.056) 0:00:57.478 ******* 2026-02-02 04:03:46.016062 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:03:46.016080 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:03:46.016091 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:03:46.016103 | orchestrator | 2026-02-02 04:03:46.016115 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-02-02 04:03:46.016127 | orchestrator | Monday 02 February 2026 04:03:21 +0000 (0:00:06.324) 0:01:03.803 ******* 2026-02-02 04:03:46.016138 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:03:46.016148 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:03:46.016159 | 
orchestrator | skipping: [testbed-node-2] 2026-02-02 04:03:46.016170 | orchestrator | 2026-02-02 04:03:46.016209 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-02-02 04:03:46.016221 | orchestrator | Monday 02 February 2026 04:03:25 +0000 (0:00:03.863) 0:01:07.666 ******* 2026-02-02 04:03:46.016233 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:03:46.016246 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:03:46.016257 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:03:46.016270 | orchestrator | 2026-02-02 04:03:46.016281 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-02-02 04:03:46.016288 | orchestrator | Monday 02 February 2026 04:03:29 +0000 (0:00:04.167) 0:01:11.834 ******* 2026-02-02 04:03:46.016296 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:03:46.016303 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:03:46.016310 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:03:46.016318 | orchestrator | 2026-02-02 04:03:46.016325 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-02-02 04:03:46.016332 | orchestrator | Monday 02 February 2026 04:03:33 +0000 (0:00:03.692) 0:01:15.526 ******* 2026-02-02 04:03:46.016340 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:03:46.016347 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:03:46.016354 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:03:46.016362 | orchestrator | 2026-02-02 04:03:46.016369 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-02-02 04:03:46.016376 | orchestrator | Monday 02 February 2026 04:03:37 +0000 (0:00:04.014) 0:01:19.541 ******* 2026-02-02 04:03:46.016389 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:03:46.016399 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:03:46.016422 | 
orchestrator | skipping: [testbed-node-2] 2026-02-02 04:03:46.016440 | orchestrator | 2026-02-02 04:03:46.016452 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-02-02 04:03:46.016463 | orchestrator | Monday 02 February 2026 04:03:37 +0000 (0:00:00.575) 0:01:20.117 ******* 2026-02-02 04:03:46.016475 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-02 04:03:46.016488 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:03:46.016501 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-02 04:03:46.016513 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:03:46.016526 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-02 04:03:46.016538 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:03:46.016550 | orchestrator | 2026-02-02 04:03:46.016562 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-02-02 04:03:46.016575 | orchestrator | Monday 02 February 2026 04:03:41 +0000 (0:00:03.708) 0:01:23.826 ******* 2026-02-02 04:03:46.016631 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:03:46.016645 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:03:46.016657 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:03:46.016669 | orchestrator | 2026-02-02 04:03:46.016681 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-02-02 04:03:46.016706 | orchestrator | Monday 02 February 2026 04:03:45 +0000 (0:00:04.635) 0:01:28.461 ******* 2026-02-02 04:04:58.208775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-02 04:04:58.208886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-02 04:04:58.208975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-02 04:04:58.208986 | orchestrator | 2026-02-02 04:04:58.208993 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-02 04:04:58.209000 | orchestrator | Monday 02 February 2026 04:03:50 +0000 (0:00:04.180) 0:01:32.641 ******* 2026-02-02 04:04:58.209005 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:04:58.209011 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:04:58.209016 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:04:58.209021 | orchestrator | 2026-02-02 04:04:58.209027 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-02-02 04:04:58.209032 | orchestrator | Monday 02 February 2026 04:03:50 +0000 (0:00:00.557) 0:01:33.199 ******* 2026-02-02 04:04:58.209037 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:04:58.209042 | orchestrator | 2026-02-02 04:04:58.209047 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] ********** 2026-02-02 04:04:58.209053 | orchestrator | Monday 02 February 2026 04:03:52 +0000 (0:00:02.039) 0:01:35.239 ******* 2026-02-02 04:04:58.209058 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:04:58.209063 | orchestrator | 2026-02-02 04:04:58.209069 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-02-02 04:04:58.209074 | orchestrator | Monday 02 February 2026 04:03:54 +0000 (0:00:02.137) 0:01:37.376 ******* 2026-02-02 04:04:58.209079 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:04:58.209090 | orchestrator | 2026-02-02 04:04:58.209095 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-02-02 04:04:58.209100 | orchestrator | Monday 02 February 2026 04:03:56 +0000 (0:00:01.981) 0:01:39.358 ******* 2026-02-02 04:04:58.209105 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:04:58.209110 | orchestrator | 2026-02-02 04:04:58.209115 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-02-02 04:04:58.209121 | orchestrator | Monday 02 February 2026 04:04:23 +0000 (0:00:26.741) 0:02:06.099 ******* 2026-02-02 04:04:58.209126 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:04:58.209131 | orchestrator | 2026-02-02 04:04:58.209140 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-02 04:04:58.209149 | orchestrator | Monday 02 February 2026 04:04:25 +0000 (0:00:02.021) 0:02:08.121 ******* 2026-02-02 04:04:58.209157 | orchestrator | 2026-02-02 04:04:58.209165 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-02 04:04:58.209174 | orchestrator | Monday 02 February 2026 04:04:25 +0000 (0:00:00.074) 0:02:08.195 ******* 2026-02-02 04:04:58.209182 | orchestrator | 2026-02-02 04:04:58.209208 | orchestrator | TASK 
[glance : Flush handlers] ************************************************* 2026-02-02 04:04:58.209217 | orchestrator | Monday 02 February 2026 04:04:25 +0000 (0:00:00.073) 0:02:08.269 ******* 2026-02-02 04:04:58.209225 | orchestrator | 2026-02-02 04:04:58.209234 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-02-02 04:04:58.209243 | orchestrator | Monday 02 February 2026 04:04:25 +0000 (0:00:00.077) 0:02:08.346 ******* 2026-02-02 04:04:58.209251 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:04:58.209258 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:04:58.209264 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:04:58.209269 | orchestrator | 2026-02-02 04:04:58.209274 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 04:04:58.209280 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-02 04:04:58.209287 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-02 04:04:58.209292 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-02 04:04:58.209297 | orchestrator | 2026-02-02 04:04:58.209303 | orchestrator | 2026-02-02 04:04:58.209310 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 04:04:58.209316 | orchestrator | Monday 02 February 2026 04:04:58 +0000 (0:00:32.303) 0:02:40.650 ******* 2026-02-02 04:04:58.209322 | orchestrator | =============================================================================== 2026-02-02 04:04:58.209328 | orchestrator | glance : Restart glance-api container ---------------------------------- 32.30s 2026-02-02 04:04:58.209335 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 26.74s 2026-02-02 04:04:58.209341 | 
orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.32s 2026-02-02 04:04:58.209353 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.08s 2026-02-02 04:04:58.590302 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.64s 2026-02-02 04:04:58.590387 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.30s 2026-02-02 04:04:58.590396 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.24s 2026-02-02 04:04:58.590403 | orchestrator | glance : Check glance containers ---------------------------------------- 4.18s 2026-02-02 04:04:58.590410 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.17s 2026-02-02 04:04:58.590417 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.16s 2026-02-02 04:04:58.590437 | orchestrator | glance : Copying over config.json files for services -------------------- 4.06s 2026-02-02 04:04:58.590459 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.01s 2026-02-02 04:04:58.590466 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.89s 2026-02-02 04:04:58.590472 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.86s 2026-02-02 04:04:58.590479 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.75s 2026-02-02 04:04:58.590486 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.71s 2026-02-02 04:04:58.590492 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.69s 2026-02-02 04:04:58.590499 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.66s 2026-02-02 04:04:58.590505 | orchestrator | 
service-ks-register : glance | Granting user roles ---------------------- 3.54s 2026-02-02 04:04:58.590511 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.22s 2026-02-02 04:05:01.071092 | orchestrator | 2026-02-02 04:05:01 | INFO  | Task 65028607-f410-49a8-b449-1236df7ab7f9 (cinder) was prepared for execution. 2026-02-02 04:05:01.071182 | orchestrator | 2026-02-02 04:05:01 | INFO  | It takes a moment until task 65028607-f410-49a8-b449-1236df7ab7f9 (cinder) has been started and output is visible here. 2026-02-02 04:05:34.872389 | orchestrator | 2026-02-02 04:05:34.872508 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 04:05:34.872526 | orchestrator | 2026-02-02 04:05:34.872539 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 04:05:34.872551 | orchestrator | Monday 02 February 2026 04:05:05 +0000 (0:00:00.270) 0:00:00.270 ******* 2026-02-02 04:05:34.872562 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:05:34.872575 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:05:34.872586 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:05:34.872597 | orchestrator | 2026-02-02 04:05:34.872608 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 04:05:34.872620 | orchestrator | Monday 02 February 2026 04:05:05 +0000 (0:00:00.322) 0:00:00.592 ******* 2026-02-02 04:05:34.872631 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-02-02 04:05:34.872643 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-02-02 04:05:34.872654 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-02-02 04:05:34.872665 | orchestrator | 2026-02-02 04:05:34.872676 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-02-02 04:05:34.872687 | orchestrator | 2026-02-02 
04:05:34.872698 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-02 04:05:34.872710 | orchestrator | Monday 02 February 2026 04:05:06 +0000 (0:00:00.484) 0:00:01.076 ******* 2026-02-02 04:05:34.872721 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 04:05:34.872733 | orchestrator | 2026-02-02 04:05:34.872744 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-02-02 04:05:34.872755 | orchestrator | Monday 02 February 2026 04:05:07 +0000 (0:00:00.560) 0:00:01.637 ******* 2026-02-02 04:05:34.872767 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-02-02 04:05:34.872778 | orchestrator | 2026-02-02 04:05:34.872790 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-02-02 04:05:34.872802 | orchestrator | Monday 02 February 2026 04:05:10 +0000 (0:00:03.003) 0:00:04.640 ******* 2026-02-02 04:05:34.872814 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-02-02 04:05:34.872825 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-02-02 04:05:34.872836 | orchestrator | 2026-02-02 04:05:34.872848 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-02-02 04:05:34.872883 | orchestrator | Monday 02 February 2026 04:05:16 +0000 (0:00:06.144) 0:00:10.785 ******* 2026-02-02 04:05:34.872895 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-02 04:05:34.872906 | orchestrator | 2026-02-02 04:05:34.872918 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-02-02 04:05:34.872932 | orchestrator | Monday 02 February 2026 04:05:19 +0000 (0:00:03.005) 
0:00:13.790 ******* 2026-02-02 04:05:34.872945 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-02 04:05:34.872959 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-02-02 04:05:34.872972 | orchestrator | 2026-02-02 04:05:34.872985 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-02-02 04:05:34.872998 | orchestrator | Monday 02 February 2026 04:05:22 +0000 (0:00:03.755) 0:00:17.545 ******* 2026-02-02 04:05:34.873011 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-02 04:05:34.873025 | orchestrator | 2026-02-02 04:05:34.873038 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-02-02 04:05:34.873051 | orchestrator | Monday 02 February 2026 04:05:25 +0000 (0:00:03.055) 0:00:20.601 ******* 2026-02-02 04:05:34.873064 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-02-02 04:05:34.873077 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-02-02 04:05:34.873089 | orchestrator | 2026-02-02 04:05:34.873101 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-02-02 04:05:34.873114 | orchestrator | Monday 02 February 2026 04:05:32 +0000 (0:00:06.858) 0:00:27.459 ******* 2026-02-02 04:05:34.873147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-02 04:05:34.873250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-02 04:05:34.873269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-02 04:05:34.873293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 04:05:34.873306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 04:05:34.873324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 04:05:34.873337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 04:05:34.873357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 04:05:40.771776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 04:05:40.771875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 04:05:40.771883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 04:05:40.771900 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 04:05:40.771906 | orchestrator | 2026-02-02 04:05:40.771913 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-02 04:05:40.771919 | orchestrator | Monday 02 February 2026 04:05:34 +0000 (0:00:02.119) 0:00:29.578 ******* 2026-02-02 04:05:40.771924 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:05:40.771930 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:05:40.771935 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:05:40.771940 | orchestrator | 2026-02-02 04:05:40.771945 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-02 04:05:40.771949 | orchestrator | Monday 02 February 2026 04:05:35 +0000 (0:00:00.296) 0:00:29.874 ******* 2026-02-02 04:05:40.771955 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 04:05:40.771960 | orchestrator | 2026-02-02 04:05:40.771965 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-02-02 04:05:40.771970 | orchestrator | Monday 02 February 2026 04:05:36 +0000 (0:00:00.812) 0:00:30.687 ******* 2026-02-02 04:05:40.771975 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-02-02 04:05:40.771981 | 
orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-02-02 04:05:40.771986 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-02-02 04:05:40.771991 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-02-02 04:05:40.772000 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-02-02 04:05:40.772005 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-02-02 04:05:40.772010 | orchestrator | 2026-02-02 04:05:40.772015 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-02-02 04:05:40.772020 | orchestrator | Monday 02 February 2026 04:05:37 +0000 (0:00:01.643) 0:00:32.330 ******* 2026-02-02 04:05:40.772037 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-02 04:05:40.772045 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-02 04:05:40.772054 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-02 04:05:40.772059 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-02 04:05:40.772068 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-02 04:05:51.429585 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-02 04:05:51.429682 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-02 04:05:51.429710 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-02 04:05:51.429719 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-02 04:05:51.429727 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-02 04:05:51.429768 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-02 
04:05:51.429776 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-02 04:05:51.429783 | orchestrator | 2026-02-02 04:05:51.429791 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-02-02 04:05:51.429799 | orchestrator | Monday 02 February 2026 04:05:41 +0000 (0:00:03.456) 0:00:35.787 ******* 2026-02-02 04:05:51.429805 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-02 04:05:51.429813 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-02 04:05:51.429819 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-02 04:05:51.429825 | orchestrator | 2026-02-02 04:05:51.429831 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-02-02 04:05:51.429837 | orchestrator | Monday 02 February 2026 04:05:42 +0000 (0:00:01.445) 0:00:37.233 ******* 2026-02-02 04:05:51.429844 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-02-02 04:05:51.429851 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-02-02 04:05:51.429857 | 
orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-02-02 04:05:51.429863 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-02-02 04:05:51.429870 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-02-02 04:05:51.429882 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-02-02 04:05:51.429889 | orchestrator | 2026-02-02 04:05:51.429896 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-02-02 04:05:51.429902 | orchestrator | Monday 02 February 2026 04:05:45 +0000 (0:00:02.694) 0:00:39.928 ******* 2026-02-02 04:05:51.429921 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-02-02 04:05:51.429935 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-02-02 04:05:51.429951 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-02-02 04:05:51.429957 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-02-02 04:05:51.429964 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-02-02 04:05:51.429970 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-02-02 04:05:51.429974 | orchestrator | 2026-02-02 04:05:51.429978 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-02-02 04:05:51.429982 | orchestrator | Monday 02 February 2026 04:05:46 +0000 (0:00:01.005) 0:00:40.933 ******* 2026-02-02 04:05:51.429986 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:05:51.429990 | orchestrator | 2026-02-02 04:05:51.429994 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-02-02 04:05:51.429998 | orchestrator | Monday 02 February 2026 04:05:46 +0000 (0:00:00.122) 0:00:41.056 ******* 2026-02-02 04:05:51.430002 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:05:51.430006 | orchestrator | 
skipping: [testbed-node-1] 2026-02-02 04:05:51.430010 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:05:51.430049 | orchestrator | 2026-02-02 04:05:51.430053 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-02 04:05:51.430057 | orchestrator | Monday 02 February 2026 04:05:46 +0000 (0:00:00.547) 0:00:41.604 ******* 2026-02-02 04:05:51.430062 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 04:05:51.430067 | orchestrator | 2026-02-02 04:05:51.430071 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-02-02 04:05:51.430075 | orchestrator | Monday 02 February 2026 04:05:47 +0000 (0:00:00.587) 0:00:42.191 ******* 2026-02-02 04:05:51.430086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-02 04:05:52.325676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-02 04:05:52.325838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-02 04:05:52.325891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 04:05:52.325906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 04:05:52.325918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 04:05:52.325956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 04:05:52.325988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 04:05:52.326086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 
04:05:52.326123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 04:05:52.326144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 04:05:52.326165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 04:05:52.326238 | orchestrator | 2026-02-02 04:05:52.326264 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-02-02 04:05:52.326287 | orchestrator | Monday 02 February 2026 04:05:51 +0000 (0:00:03.962) 0:00:46.154 ******* 2026-02-02 04:05:52.326319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-02 04:05:52.428841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 04:05:52.428988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 04:05:52.429016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 04:05:52.429031 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:05:52.429049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-02 04:05:52.429064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 04:05:52.429103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-02-02 04:05:52.429138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 04:05:52.429149 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:05:52.429158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-02 04:05:52.429168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 04:05:52.429177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 04:05:52.429257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 04:05:52.429273 | orchestrator | skipping: 
[testbed-node-2] 2026-02-02 04:05:52.429282 | orchestrator | 2026-02-02 04:05:52.429293 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-02-02 04:05:52.429310 | orchestrator | Monday 02 February 2026 04:05:52 +0000 (0:00:00.895) 0:00:47.050 ******* 2026-02-02 04:05:52.986093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-02 04:05:52.986233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 04:05:52.986252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 04:05:52.986265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 04:05:52.986277 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:05:52.986291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-02 04:05:52.986344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 04:05:52.986364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 04:05:52.986377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 04:05:52.986388 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:05:52.986400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-02 04:05:52.986412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 04:05:52.986438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 04:05:57.229904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 04:05:57.230163 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:05:57.230239 | orchestrator | 2026-02-02 04:05:57.230270 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 
2026-02-02 04:05:57.230286 | orchestrator | Monday 02 February 2026 04:05:53 +0000 (0:00:00.872) 0:00:47.922 ******* 2026-02-02 04:05:57.230299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-02 04:05:57.230313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-02 
04:05:57.230325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-02 04:05:57.230377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 04:05:57.230391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 04:05:57.230409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 04:05:57.230426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 04:05:57.230446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 04:05:57.230461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 04:05:57.230490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 04:06:10.290380 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 04:06:10.290499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 04:06:10.290517 | orchestrator | 2026-02-02 04:06:10.290532 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-02-02 04:06:10.290545 | orchestrator | Monday 02 February 2026 04:05:57 +0000 (0:00:04.024) 0:00:51.947 ******* 2026-02-02 04:06:10.290556 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-02 04:06:10.290569 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-02 04:06:10.290579 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-02 04:06:10.290590 | orchestrator | 2026-02-02 04:06:10.290607 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-02-02 04:06:10.290626 | orchestrator | Monday 02 February 2026 04:05:59 +0000 (0:00:01.876) 0:00:53.823 ******* 2026-02-02 04:06:10.290646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-02 04:06:10.290702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-02 04:06:10.290762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-02 04:06:10.290785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 04:06:10.290806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 04:06:10.290820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 04:06:10.290842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 04:06:10.290856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 04:06:10.290880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 04:06:12.507417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 04:06:12.507506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 04:06:12.507517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 04:06:12.507542 | orchestrator | 2026-02-02 04:06:12.507551 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-02-02 04:06:12.507560 | orchestrator | Monday 02 February 2026 04:06:10 +0000 (0:00:11.184) 0:01:05.007 ******* 2026-02-02 04:06:12.507567 | orchestrator | changed: [testbed-node-0] 
2026-02-02 04:06:12.507574 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:06:12.507581 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:06:12.507588 | orchestrator | 2026-02-02 04:06:12.507595 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-02-02 04:06:12.507602 | orchestrator | Monday 02 February 2026 04:06:11 +0000 (0:00:01.535) 0:01:06.543 ******* 2026-02-02 04:06:12.507610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-02 04:06:12.507619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}})  2026-02-02 04:06:12.507644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 04:06:12.507653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 04:06:12.507666 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:06:12.507674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-02 04:06:12.507681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 04:06:12.507688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 04:06:12.507705 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 04:06:16.017146 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:06:16.017318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-02 04:06:16.017372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 04:06:16.017386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 04:06:16.017397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 04:06:16.017407 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:06:16.017417 | orchestrator | 2026-02-02 
04:06:16.017428 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-02-02 04:06:16.017438 | orchestrator | Monday 02 February 2026 04:06:12 +0000 (0:00:00.681) 0:01:07.224 ******* 2026-02-02 04:06:16.017447 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:06:16.017456 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:06:16.017464 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:06:16.017473 | orchestrator | 2026-02-02 04:06:16.017482 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-02-02 04:06:16.017491 | orchestrator | Monday 02 February 2026 04:06:13 +0000 (0:00:00.593) 0:01:07.818 ******* 2026-02-02 04:06:16.017531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-02 04:06:16.017551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-02 04:06:16.017561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-02 04:06:16.017571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 04:06:16.017580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 04:06:16.017594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 04:06:16.017611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 04:07:41.686555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 04:07:41.686677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-02 04:07:41.686695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 04:07:41.686707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-02 04:07:41.686735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-02-02 04:07:41.686789 | orchestrator | 2026-02-02 04:07:41.686803 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-02 04:07:41.686814 | orchestrator | Monday 02 February 2026 04:06:16 +0000 (0:00:02.934) 0:01:10.753 ******* 2026-02-02 04:07:41.686825 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:07:41.686836 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:07:41.686846 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:07:41.686856 | orchestrator | 2026-02-02 04:07:41.686865 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-02-02 04:07:41.686875 | orchestrator | Monday 02 February 2026 04:06:16 +0000 (0:00:00.315) 0:01:11.068 ******* 2026-02-02 04:07:41.686885 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:07:41.686894 | orchestrator | 2026-02-02 04:07:41.686923 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-02-02 04:07:41.686933 | orchestrator | Monday 02 February 2026 04:06:18 +0000 (0:00:02.031) 0:01:13.100 ******* 2026-02-02 04:07:41.686943 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:07:41.686953 | orchestrator | 2026-02-02 04:07:41.686963 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-02-02 04:07:41.686972 | orchestrator | Monday 02 February 2026 04:06:20 +0000 (0:00:02.161) 0:01:15.261 ******* 2026-02-02 04:07:41.686982 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:07:41.686991 | orchestrator | 2026-02-02 04:07:41.687000 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-02 04:07:41.687010 | orchestrator | Monday 02 February 2026 04:06:38 +0000 (0:00:17.856) 0:01:33.118 ******* 2026-02-02 04:07:41.687019 | orchestrator | 2026-02-02 04:07:41.687029 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2026-02-02 04:07:41.687038 | orchestrator | Monday 02 February 2026 04:06:38 +0000 (0:00:00.302) 0:01:33.420 ******* 2026-02-02 04:07:41.687047 | orchestrator | 2026-02-02 04:07:41.687057 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-02 04:07:41.687067 | orchestrator | Monday 02 February 2026 04:06:38 +0000 (0:00:00.073) 0:01:33.493 ******* 2026-02-02 04:07:41.687076 | orchestrator | 2026-02-02 04:07:41.687086 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-02-02 04:07:41.687094 | orchestrator | Monday 02 February 2026 04:06:38 +0000 (0:00:00.092) 0:01:33.586 ******* 2026-02-02 04:07:41.687100 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:07:41.687106 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:07:41.687111 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:07:41.687117 | orchestrator | 2026-02-02 04:07:41.687123 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-02-02 04:07:41.687129 | orchestrator | Monday 02 February 2026 04:07:05 +0000 (0:00:26.893) 0:02:00.479 ******* 2026-02-02 04:07:41.687135 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:07:41.687141 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:07:41.687147 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:07:41.687152 | orchestrator | 2026-02-02 04:07:41.687158 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-02-02 04:07:41.687164 | orchestrator | Monday 02 February 2026 04:07:13 +0000 (0:00:08.116) 0:02:08.595 ******* 2026-02-02 04:07:41.687192 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:07:41.687202 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:07:41.687211 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:07:41.687220 | orchestrator | 2026-02-02 
04:07:41.687230 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-02-02 04:07:41.687238 | orchestrator | Monday 02 February 2026 04:07:35 +0000 (0:00:21.545) 0:02:30.141 ******* 2026-02-02 04:07:41.687247 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:07:41.687257 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:07:41.687267 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:07:41.687286 | orchestrator | 2026-02-02 04:07:41.687296 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-02-02 04:07:41.687303 | orchestrator | Monday 02 February 2026 04:07:41 +0000 (0:00:05.886) 0:02:36.028 ******* 2026-02-02 04:07:41.687309 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:07:41.687315 | orchestrator | 2026-02-02 04:07:41.687321 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 04:07:41.687328 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-02 04:07:41.687335 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-02 04:07:41.687341 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-02 04:07:41.687347 | orchestrator | 2026-02-02 04:07:41.687353 | orchestrator | 2026-02-02 04:07:41.687369 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 04:07:41.687375 | orchestrator | Monday 02 February 2026 04:07:41 +0000 (0:00:00.264) 0:02:36.293 ******* 2026-02-02 04:07:41.687381 | orchestrator | =============================================================================== 2026-02-02 04:07:41.687387 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 26.89s 2026-02-02 04:07:41.687393 | orchestrator | cinder 
: Restart cinder-volume container ------------------------------- 21.55s 2026-02-02 04:07:41.687399 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.86s 2026-02-02 04:07:41.687405 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.18s 2026-02-02 04:07:41.687416 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 8.12s 2026-02-02 04:07:41.687422 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 6.86s 2026-02-02 04:07:41.687428 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.14s 2026-02-02 04:07:41.687434 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 5.89s 2026-02-02 04:07:41.687440 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.02s 2026-02-02 04:07:41.687446 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.96s 2026-02-02 04:07:41.687452 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.76s 2026-02-02 04:07:41.687457 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.46s 2026-02-02 04:07:41.687463 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.06s 2026-02-02 04:07:41.687469 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.01s 2026-02-02 04:07:41.687481 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.00s 2026-02-02 04:07:42.080248 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.93s 2026-02-02 04:07:42.080340 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.69s 2026-02-02 04:07:42.080402 | orchestrator | cinder : Creating 
Cinder database user and setting permissions ---------- 2.16s 2026-02-02 04:07:42.080411 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.12s 2026-02-02 04:07:42.080418 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.03s 2026-02-02 04:07:44.596735 | orchestrator | 2026-02-02 04:07:44 | INFO  | Task 6725f67e-8aaa-4800-be6c-51c95f8693a9 (barbican) was prepared for execution. 2026-02-02 04:07:44.596850 | orchestrator | 2026-02-02 04:07:44 | INFO  | It takes a moment until task 6725f67e-8aaa-4800-be6c-51c95f8693a9 (barbican) has been started and output is visible here. 2026-02-02 04:08:25.592391 | orchestrator | 2026-02-02 04:08:25.592518 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 04:08:25.592559 | orchestrator | 2026-02-02 04:08:25.592573 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 04:08:25.592585 | orchestrator | Monday 02 February 2026 04:07:49 +0000 (0:00:00.275) 0:00:00.275 ******* 2026-02-02 04:08:25.592596 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:08:25.592609 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:08:25.592620 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:08:25.592632 | orchestrator | 2026-02-02 04:08:25.592644 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 04:08:25.592655 | orchestrator | Monday 02 February 2026 04:07:49 +0000 (0:00:00.305) 0:00:00.581 ******* 2026-02-02 04:08:25.592666 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-02-02 04:08:25.592678 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-02-02 04:08:25.592689 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-02-02 04:08:25.592700 | orchestrator | 2026-02-02 04:08:25.592712 | orchestrator | PLAY [Apply role barbican] 
***************************************************** 2026-02-02 04:08:25.592723 | orchestrator | 2026-02-02 04:08:25.592733 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-02 04:08:25.592745 | orchestrator | Monday 02 February 2026 04:07:49 +0000 (0:00:00.504) 0:00:01.086 ******* 2026-02-02 04:08:25.592756 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 04:08:25.592768 | orchestrator | 2026-02-02 04:08:25.592779 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-02-02 04:08:25.592790 | orchestrator | Monday 02 February 2026 04:07:50 +0000 (0:00:00.589) 0:00:01.675 ******* 2026-02-02 04:08:25.592815 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-02-02 04:08:25.592827 | orchestrator | 2026-02-02 04:08:25.592838 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-02-02 04:08:25.592849 | orchestrator | Monday 02 February 2026 04:07:53 +0000 (0:00:03.048) 0:00:04.724 ******* 2026-02-02 04:08:25.592860 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-02-02 04:08:25.592871 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-02-02 04:08:25.592885 | orchestrator | 2026-02-02 04:08:25.592898 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-02-02 04:08:25.592910 | orchestrator | Monday 02 February 2026 04:07:59 +0000 (0:00:05.591) 0:00:10.316 ******* 2026-02-02 04:08:25.592924 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-02 04:08:25.592937 | orchestrator | 2026-02-02 04:08:25.592950 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-02-02 
04:08:25.592964 | orchestrator | Monday 02 February 2026 04:08:02 +0000 (0:00:03.085) 0:00:13.401 ******* 2026-02-02 04:08:25.592977 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-02 04:08:25.592990 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-02-02 04:08:25.593002 | orchestrator | 2026-02-02 04:08:25.593016 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-02-02 04:08:25.593039 | orchestrator | Monday 02 February 2026 04:08:06 +0000 (0:00:03.980) 0:00:17.381 ******* 2026-02-02 04:08:25.593051 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-02 04:08:25.593063 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-02-02 04:08:25.593082 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-02-02 04:08:25.593118 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-02-02 04:08:25.593138 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-02-02 04:08:25.593159 | orchestrator | 2026-02-02 04:08:25.593202 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-02-02 04:08:25.593214 | orchestrator | Monday 02 February 2026 04:08:20 +0000 (0:00:14.306) 0:00:31.688 ******* 2026-02-02 04:08:25.593236 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-02-02 04:08:25.593247 | orchestrator | 2026-02-02 04:08:25.593258 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-02-02 04:08:25.593269 | orchestrator | Monday 02 February 2026 04:08:23 +0000 (0:00:03.462) 0:00:35.150 ******* 2026-02-02 04:08:25.593284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-02 04:08:25.593320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-02 04:08:25.593333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-02 04:08:25.593346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:25.593366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:25.593385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:25.593406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:31.747952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:31.748061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:31.748078 | orchestrator | 2026-02-02 04:08:31.748091 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-02-02 04:08:31.748105 | orchestrator | Monday 02 February 2026 04:08:25 +0000 (0:00:01.644) 0:00:36.795 ******* 2026-02-02 04:08:31.748126 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-02-02 04:08:31.748149 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-02-02 04:08:31.748329 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-02-02 04:08:31.748352 | orchestrator | 2026-02-02 04:08:31.748370 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-02-02 04:08:31.748388 | orchestrator | Monday 02 February 2026 04:08:26 +0000 (0:00:01.190) 0:00:37.985 ******* 2026-02-02 04:08:31.748405 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:08:31.748424 | orchestrator | 2026-02-02 04:08:31.748442 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-02-02 04:08:31.748492 | orchestrator | Monday 02 February 2026 04:08:27 +0000 (0:00:00.358) 0:00:38.344 ******* 2026-02-02 04:08:31.748515 | orchestrator | 
skipping: [testbed-node-0] 2026-02-02 04:08:31.748535 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:08:31.748554 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:08:31.748570 | orchestrator | 2026-02-02 04:08:31.748584 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-02 04:08:31.748597 | orchestrator | Monday 02 February 2026 04:08:27 +0000 (0:00:00.317) 0:00:38.661 ******* 2026-02-02 04:08:31.748626 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 04:08:31.748640 | orchestrator | 2026-02-02 04:08:31.748652 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-02-02 04:08:31.748666 | orchestrator | Monday 02 February 2026 04:08:28 +0000 (0:00:00.560) 0:00:39.222 ******* 2026-02-02 04:08:31.748683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-02 04:08:31.748737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-02 04:08:31.748765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-02 04:08:31.748785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:31.748845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:31.748862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:31.748874 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:31.748896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:33.254447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:33.254559 | orchestrator | 2026-02-02 04:08:33.254577 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-02-02 04:08:33.254591 | orchestrator | Monday 02 February 2026 04:08:31 +0000 (0:00:03.724) 0:00:42.946 ******* 2026-02-02 04:08:33.254680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-02 04:08:33.254713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-02 04:08:33.254735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-02 04:08:33.254756 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:08:33.254777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-02 04:08:33.254824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-02 04:08:33.254847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-02 04:08:33.254871 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:08:33.254888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-02 04:08:33.254901 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-02 04:08:33.254912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-02 04:08:33.254924 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:08:33.254935 | orchestrator | 2026-02-02 04:08:33.254946 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-02-02 04:08:33.254959 | orchestrator | Monday 02 February 2026 04:08:32 +0000 (0:00:00.695) 0:00:43.642 ******* 2026-02-02 04:08:33.255008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-02 04:08:36.662753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-02 04:08:36.662855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-02 
04:08:36.662868 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:08:36.662895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-02 04:08:36.662905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-02 04:08:36.662914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-02 04:08:36.662922 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:08:36.662948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-02 04:08:36.662978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-02 04:08:36.662991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-02 04:08:36.663000 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:08:36.663008 | orchestrator | 2026-02-02 04:08:36.663017 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-02-02 04:08:36.663028 | orchestrator | Monday 02 February 2026 04:08:33 +0000 (0:00:00.822) 0:00:44.464 ******* 2026-02-02 04:08:36.663037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-02 04:08:36.663046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-02 04:08:36.663066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-02 04:08:46.372914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:46.373107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:46.373139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:46.373190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:46.373213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:46.373265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:46.373288 | orchestrator | 2026-02-02 04:08:46.373310 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-02-02 04:08:46.373332 | orchestrator | Monday 02 February 2026 04:08:36 +0000 (0:00:03.401) 0:00:47.866 ******* 2026-02-02 04:08:46.373352 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:08:46.373372 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:08:46.373390 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:08:46.373409 | orchestrator | 2026-02-02 04:08:46.373453 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-02-02 04:08:46.373474 | orchestrator | Monday 02 February 2026 04:08:38 +0000 (0:00:01.496) 0:00:49.362 ******* 2026-02-02 04:08:46.373517 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-02 04:08:46.373536 | orchestrator | 2026-02-02 04:08:46.373556 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-02-02 04:08:46.373576 | orchestrator | Monday 02 February 2026 04:08:39 +0000 (0:00:00.980) 0:00:50.343 ******* 2026-02-02 04:08:46.373596 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:08:46.373615 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:08:46.373634 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:08:46.373653 | orchestrator | 2026-02-02 04:08:46.373672 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-02-02 04:08:46.373693 | orchestrator | Monday 02 February 2026 04:08:39 +0000 (0:00:00.599) 0:00:50.943 ******* 2026-02-02 04:08:46.373755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-02 04:08:46.373780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-02 04:08:46.373815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-02 04:08:46.373848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:47.267081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:47.267235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:47.267255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:47.267286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:47.267296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:47.267304 | orchestrator | 2026-02-02 04:08:47.267316 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-02-02 04:08:47.267323 | orchestrator | Monday 02 February 2026 04:08:46 +0000 (0:00:06.633) 0:00:57.576 ******* 2026-02-02 04:08:47.267340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-02 04:08:47.267351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-02 04:08:47.267357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-02 04:08:47.267363 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:08:47.267370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-02 04:08:47.267384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-02 04:08:47.267391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-02 04:08:47.267399 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:08:47.267421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-02 04:08:49.631294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-02 04:08:49.631498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-02 04:08:49.631546 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:08:49.631562 | orchestrator | 2026-02-02 04:08:49.631578 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-02-02 04:08:49.631638 | orchestrator | Monday 02 February 2026 04:08:47 +0000 (0:00:00.896) 0:00:58.473 ******* 2026-02-02 04:08:49.631656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-02 04:08:49.631672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-02 04:08:49.631727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-02 04:08:49.631827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:49.631859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:49.631874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:49.631887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:49.631902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:49.631917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:08:49.631930 | orchestrator | 2026-02-02 04:08:49.631944 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-02 04:08:49.631967 | orchestrator | Monday 02 February 2026 04:08:49 +0000 (0:00:02.358) 0:01:00.832 ******* 2026-02-02 04:09:30.452607 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:09:30.452777 | orchestrator | skipping: [testbed-node-1] 2026-02-02 
04:09:30.452792 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:09:30.452800 | orchestrator | 2026-02-02 04:09:30.452824 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-02-02 04:09:30.452852 | orchestrator | Monday 02 February 2026 04:08:49 +0000 (0:00:00.313) 0:01:01.145 ******* 2026-02-02 04:09:30.452861 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:09:30.452868 | orchestrator | 2026-02-02 04:09:30.452875 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-02-02 04:09:30.452882 | orchestrator | Monday 02 February 2026 04:08:51 +0000 (0:00:01.980) 0:01:03.125 ******* 2026-02-02 04:09:30.452889 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:09:30.452896 | orchestrator | 2026-02-02 04:09:30.452904 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-02-02 04:09:30.452912 | orchestrator | Monday 02 February 2026 04:08:54 +0000 (0:00:02.113) 0:01:05.239 ******* 2026-02-02 04:09:30.452920 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:09:30.452927 | orchestrator | 2026-02-02 04:09:30.452934 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-02 04:09:30.452942 | orchestrator | Monday 02 February 2026 04:09:05 +0000 (0:00:11.426) 0:01:16.666 ******* 2026-02-02 04:09:30.452950 | orchestrator | 2026-02-02 04:09:30.452957 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-02 04:09:30.452964 | orchestrator | Monday 02 February 2026 04:09:05 +0000 (0:00:00.302) 0:01:16.968 ******* 2026-02-02 04:09:30.452971 | orchestrator | 2026-02-02 04:09:30.452978 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-02 04:09:30.452985 | orchestrator | Monday 02 February 2026 04:09:05 +0000 (0:00:00.073) 0:01:17.042 ******* 2026-02-02 
04:09:30.452992 | orchestrator | 2026-02-02 04:09:30.452999 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-02-02 04:09:30.453006 | orchestrator | Monday 02 February 2026 04:09:05 +0000 (0:00:00.075) 0:01:17.117 ******* 2026-02-02 04:09:30.453013 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:09:30.453051 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:09:30.453059 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:09:30.453065 | orchestrator | 2026-02-02 04:09:30.453072 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-02-02 04:09:30.453079 | orchestrator | Monday 02 February 2026 04:09:13 +0000 (0:00:07.693) 0:01:24.810 ******* 2026-02-02 04:09:30.453086 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:09:30.453103 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:09:30.453112 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:09:30.453119 | orchestrator | 2026-02-02 04:09:30.453126 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-02-02 04:09:30.453134 | orchestrator | Monday 02 February 2026 04:09:21 +0000 (0:00:07.959) 0:01:32.770 ******* 2026-02-02 04:09:30.453141 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:09:30.453148 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:09:30.453155 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:09:30.453213 | orchestrator | 2026-02-02 04:09:30.453222 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 04:09:30.453231 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-02 04:09:30.453241 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-02 04:09:30.453248 | orchestrator | testbed-node-2 : ok=14  changed=10  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-02 04:09:30.453255 | orchestrator | 2026-02-02 04:09:30.453262 | orchestrator | 2026-02-02 04:09:30.453270 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 04:09:30.453276 | orchestrator | Monday 02 February 2026 04:09:30 +0000 (0:00:08.467) 0:01:41.238 ******* 2026-02-02 04:09:30.453284 | orchestrator | =============================================================================== 2026-02-02 04:09:30.453291 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 14.31s 2026-02-02 04:09:30.453329 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.43s 2026-02-02 04:09:30.453337 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 8.47s 2026-02-02 04:09:30.453344 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 7.96s 2026-02-02 04:09:30.453350 | orchestrator | barbican : Restart barbican-api container ------------------------------- 7.69s 2026-02-02 04:09:30.453356 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.63s 2026-02-02 04:09:30.453364 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 5.59s 2026-02-02 04:09:30.453402 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.98s 2026-02-02 04:09:30.453412 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.72s 2026-02-02 04:09:30.453420 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.46s 2026-02-02 04:09:30.453427 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.40s 2026-02-02 04:09:30.453437 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.09s 
2026-02-02 04:09:30.453444 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.05s 2026-02-02 04:09:30.453451 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.36s 2026-02-02 04:09:30.453460 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.11s 2026-02-02 04:09:30.453488 | orchestrator | barbican : Creating barbican database ----------------------------------- 1.98s 2026-02-02 04:09:30.453496 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.64s 2026-02-02 04:09:30.453513 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.50s 2026-02-02 04:09:30.453521 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.19s 2026-02-02 04:09:30.453528 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 0.98s 2026-02-02 04:09:32.959502 | orchestrator | 2026-02-02 04:09:32 | INFO  | Task b475d12b-915c-44c1-8619-4d2b090ae6f3 (designate) was prepared for execution. 2026-02-02 04:09:32.959587 | orchestrator | 2026-02-02 04:09:32 | INFO  | It takes a moment until task b475d12b-915c-44c1-8619-4d2b090ae6f3 (designate) has been started and output is visible here. 
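The PLAY RECAP above reports per-host counters (`ok`, `changed`, `unreachable`, `failed`, …) that determine whether the play succeeded. As a minimal sketch (a hypothetical helper, not part of this job) of how such recap lines can be parsed to flag hosts with failed or unreachable tasks:

```python
import re

# Matches Ansible PLAY RECAP lines such as:
#   testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")


def parse_recap_line(line):
    """Return (host, counters-dict) for a recap line, or None if it doesn't match."""
    m = RECAP_RE.match(line.strip())
    if m is None:
        return None
    counters = {
        key: int(val)
        for key, val in (pair.split("=") for pair in m.group("counters").split())
    }
    return m.group("host"), counters


def hosts_with_problems(lines):
    """Hosts whose recap shows any failed or unreachable tasks."""
    bad = []
    for line in lines:
        parsed = parse_recap_line(line)
        if parsed and (parsed[1].get("failed", 0) or parsed[1].get("unreachable", 0)):
            bad.append(parsed[0])
    return bad
```

For the recap shown here, all three testbed nodes report `failed=0 unreachable=0`, so such a check would pass and the job proceeds to the designate role.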
2026-02-02 04:10:04.020644 | orchestrator |
2026-02-02 04:10:04.020715 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-02 04:10:04.020723 | orchestrator |
2026-02-02 04:10:04.020728 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-02 04:10:04.020732 | orchestrator | Monday 02 February 2026 04:09:37 +0000 (0:00:00.273) 0:00:00.273 *******
2026-02-02 04:10:04.020736 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:10:04.020742 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:10:04.020746 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:10:04.020750 | orchestrator |
2026-02-02 04:10:04.020754 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-02 04:10:04.020758 | orchestrator | Monday 02 February 2026 04:09:37 +0000 (0:00:00.311) 0:00:00.584 *******
2026-02-02 04:10:04.020763 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-02-02 04:10:04.020768 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-02-02 04:10:04.020772 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-02-02 04:10:04.020776 | orchestrator |
2026-02-02 04:10:04.020780 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-02-02 04:10:04.020784 | orchestrator |
2026-02-02 04:10:04.020787 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-02 04:10:04.020791 | orchestrator | Monday 02 February 2026 04:09:38 +0000 (0:00:00.458) 0:00:01.043 *******
2026-02-02 04:10:04.020796 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 04:10:04.020815 | orchestrator |
2026-02-02 04:10:04.020820 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-02-02 04:10:04.020824 | orchestrator | Monday 02 February 2026 04:09:38 +0000 (0:00:00.622) 0:00:01.666 *******
2026-02-02 04:10:04.020827 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-02-02 04:10:04.020831 | orchestrator |
2026-02-02 04:10:04.020836 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-02-02 04:10:04.020842 | orchestrator | Monday 02 February 2026 04:09:42 +0000 (0:00:03.371) 0:00:05.037 *******
2026-02-02 04:10:04.020848 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-02-02 04:10:04.020856 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-02-02 04:10:04.020863 | orchestrator |
2026-02-02 04:10:04.020869 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-02-02 04:10:04.020875 | orchestrator | Monday 02 February 2026 04:09:48 +0000 (0:00:06.094) 0:00:11.132 *******
2026-02-02 04:10:04.020881 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-02 04:10:04.020888 | orchestrator |
2026-02-02 04:10:04.020894 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-02-02 04:10:04.020902 | orchestrator | Monday 02 February 2026 04:09:51 +0000 (0:00:03.135) 0:00:14.267 *******
2026-02-02 04:10:04.020906 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-02 04:10:04.020910 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-02-02 04:10:04.020914 | orchestrator |
2026-02-02 04:10:04.020918 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-02-02 04:10:04.020921 | orchestrator | Monday 02 February 2026 04:09:55 +0000 (0:00:03.060) 0:00:18.060 *******
2026-02-02 04:10:04.020925 | orchestrator | ok: [testbed-node-0] =>
(item=admin) 2026-02-02 04:10:04.020930 | orchestrator | 2026-02-02 04:10:04.020934 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-02-02 04:10:04.020938 | orchestrator | Monday 02 February 2026 04:09:58 +0000 (0:00:03.060) 0:00:21.121 ******* 2026-02-02 04:10:04.020941 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-02-02 04:10:04.020945 | orchestrator | 2026-02-02 04:10:04.020949 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-02-02 04:10:04.020953 | orchestrator | Monday 02 February 2026 04:10:02 +0000 (0:00:03.598) 0:00:24.719 ******* 2026-02-02 04:10:04.020967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-02 04:10:04.020986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-02 04:10:04.020995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-02 04:10:04.021001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 04:10:04.021007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 04:10:04.021011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 04:10:04.021018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:04.021027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:10.122712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:10.122824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:10.122843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:10.122855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:10.122867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:10.122895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:10.122949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:10.122963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 
04:10:10.122975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:10.122986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:10.122998 | orchestrator | 2026-02-02 04:10:10.123012 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-02-02 04:10:10.123025 | orchestrator | Monday 02 February 2026 04:10:04 +0000 (0:00:02.760) 0:00:27.480 ******* 2026-02-02 04:10:10.123037 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:10:10.123049 | orchestrator | 2026-02-02 04:10:10.123060 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-02-02 04:10:10.123073 | orchestrator | Monday 02 February 2026 04:10:04 +0000 (0:00:00.123) 0:00:27.603 ******* 2026-02-02 04:10:10.123091 | orchestrator | skipping: [testbed-node-0] 2026-02-02 
04:10:10.123110 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:10:10.123127 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:10:10.123145 | orchestrator | 2026-02-02 04:10:10.123195 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-02 04:10:10.123214 | orchestrator | Monday 02 February 2026 04:10:05 +0000 (0:00:00.541) 0:00:28.144 ******* 2026-02-02 04:10:10.123234 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 04:10:10.123253 | orchestrator | 2026-02-02 04:10:10.123272 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-02-02 04:10:10.123305 | orchestrator | Monday 02 February 2026 04:10:06 +0000 (0:00:00.593) 0:00:28.738 ******* 2026-02-02 04:10:10.123337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-02 04:10:10.123376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-02 04:10:11.862636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-02 04:10:11.862759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 04:10:11.862776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 04:10:11.862823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 04:10:11.862837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:11.862867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:11.862879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:11.862891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:11.862904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:11.862915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:11.862939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:11.862951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:11.862970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:12.716520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:12.716635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:12.716659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:12.716708 | orchestrator | 2026-02-02 04:10:12.716730 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-02-02 04:10:12.716749 | orchestrator | Monday 02 February 2026 04:10:11 +0000 (0:00:05.785) 0:00:34.524 ******* 2026-02-02 04:10:12.716785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-02 04:10:12.716806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-02 04:10:12.716847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 04:10:12.716867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 04:10:12.716884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 04:10:12.716902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2026-02-02 04:10:12.716933 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:10:12.716959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-02 04:10:12.716980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-02 04:10:12.717008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-02 04:10:13.461637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 04:10:13.461738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-02 
04:10:13.461777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 04:10:13.461800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 04:10:13.461809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 04:10:13.461820 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-02 04:10:13.461845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 04:10:13.461855 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:10:13.461868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 04:10:13.461883 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-02 04:10:13.461892 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:10:13.461901 | orchestrator | 2026-02-02 04:10:13.461911 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-02-02 04:10:13.461923 | orchestrator | Monday 02 February 2026 04:10:12 +0000 (0:00:00.982) 0:00:35.507 ******* 2026-02-02 04:10:13.461937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-02 04:10:13.461947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-02 04:10:13.461956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 04:10:13.461972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 04:10:13.791606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 04:10:13.791699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-02 04:10:13.791712 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:10:13.791739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-02 04:10:13.791749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-02 04:10:13.791756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 04:10:13.791764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 04:10:13.791804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 04:10:13.791813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-02 04:10:13.791821 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:10:13.791832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-02 04:10:13.791839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-02 04:10:13.791846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 04:10:13.791853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 04:10:13.791873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 04:10:17.776692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-02 04:10:17.776823 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:10:17.776847 | orchestrator | 2026-02-02 04:10:17.776867 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-02-02 04:10:17.776887 | orchestrator 
| Monday 02 February 2026 04:10:13 +0000 (0:00:00.947) 0:00:36.454 ******* 2026-02-02 04:10:17.776928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-02 04:10:17.776952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-02 04:10:17.776973 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-02 04:10:17.777048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 04:10:17.777073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 04:10:17.777104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 04:10:17.777124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:17.777146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:17.777188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:17.777218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:17.777243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:28.847542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:28.847653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:28.847666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:28.847674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:28.847701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:28.847710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}}) 2026-02-02 04:10:28.847731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:28.847739 | orchestrator | 2026-02-02 04:10:28.847748 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-02-02 04:10:28.847757 | orchestrator | Monday 02 February 2026 04:10:19 +0000 (0:00:05.684) 0:00:42.139 ******* 2026-02-02 04:10:28.847769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-02 04:10:28.847779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-02 04:10:28.847794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-02 04:10:28.847803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 04:10:28.847818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 04:10:36.861315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 04:10:36.861439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:36.861459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:36.861493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:36.861506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:36.861519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:36.861548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:36.861566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:36.861578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:36.861590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:36.861610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:36.861622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:36.861633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:36.861645 | orchestrator | 2026-02-02 04:10:36.861658 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-02-02 04:10:36.861671 | orchestrator | Monday 02 February 2026 04:10:33 +0000 (0:00:13.839) 0:00:55.978 ******* 2026-02-02 04:10:36.861690 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-02 04:10:41.031003 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-02 04:10:41.031113 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-02 04:10:41.031131 | orchestrator | 2026-02-02 04:10:41.031147 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-02-02 04:10:41.031232 | orchestrator | Monday 02 February 2026 04:10:36 +0000 (0:00:03.545) 0:00:59.524 ******* 2026-02-02 04:10:41.031252 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-02 04:10:41.031271 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-02 04:10:41.031291 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-02 04:10:41.031311 | orchestrator | 2026-02-02 04:10:41.031330 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-02-02 04:10:41.031362 | orchestrator | Monday 02 February 2026 04:10:39 +0000 (0:00:02.418) 0:01:01.943 ******* 2026-02-02 04:10:41.031378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-02 04:10:41.031416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-02 04:10:41.031429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-02-02 04:10:41.031460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 04:10:41.031473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 04:10:41.031491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-02-02 04:10:41.031512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 04:10:41.031523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 04:10:41.031535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-02-02 04:10:41.031548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 04:10:41.031571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 04:10:43.754851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-02-02 04:10:43.754972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 04:10:43.754987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 04:10:43.754997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 04:10:43.755006 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:43.755016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:43.755041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:43.755058 | orchestrator | 2026-02-02 04:10:43.755068 | orchestrator | TASK [designate : Copying over rndc.key] 
*************************************** 2026-02-02 04:10:43.755079 | orchestrator | Monday 02 February 2026 04:10:42 +0000 (0:00:02.816) 0:01:04.760 ******* 2026-02-02 04:10:43.755093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-02 04:10:43.755104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-02 
04:10:43.755114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-02 04:10:43.755123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 04:10:43.755137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 04:10:44.697715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 04:10:44.697813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 04:10:44.697829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 04:10:44.697841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 04:10:44.697853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 04:10:44.697863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 04:10:44.697909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 04:10:44.697927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 04:10:44.697938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 04:10:44.697948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 04:10:44.697958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:44.697970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:44.697980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:44.697998 | orchestrator | 2026-02-02 04:10:44.698010 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-02 04:10:44.698083 | orchestrator | Monday 02 February 2026 04:10:44 +0000 (0:00:02.600) 0:01:07.360 ******* 2026-02-02 04:10:45.598290 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:10:45.598391 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:10:45.598405 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:10:45.598417 | orchestrator | 2026-02-02 04:10:45.598428 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-02-02 04:10:45.598440 | orchestrator | Monday 02 February 2026 04:10:44 +0000 (0:00:00.295) 0:01:07.655 ******* 2026-02-02 04:10:45.598469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-02 04:10:45.598485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-02 04:10:45.598496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 04:10:45.598508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 04:10:45.598520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 04:10:45.598569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-02 04:10:45.598581 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:10:45.598597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-02 04:10:45.598608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-02 04:10:45.598618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 04:10:45.598628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 04:10:45.598645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 04:10:45.598662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-02 04:10:48.697028 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:10:48.697153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-02 04:10:48.697210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-02 04:10:48.697225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 04:10:48.697237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 04:10:48.697273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 04:10:48.697285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-02 04:10:48.697297 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:10:48.697309 | orchestrator | 2026-02-02 04:10:48.697338 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-02-02 04:10:48.697352 | orchestrator | Monday 02 February 2026 04:10:45 +0000 (0:00:00.720) 0:01:08.376 ******* 2026-02-02 04:10:48.697370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-02 04:10:48.697383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-02 04:10:48.697395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-02 04:10:48.697418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 04:10:48.697436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 04:10:50.409650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-02 04:10:50.409767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:50.409786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:50.409799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:50.409833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:50.409847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:50.409884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:50.409897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:50.409909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:50.409921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:50.409940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:50.409952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:50.409964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:10:50.409976 | orchestrator | 2026-02-02 04:10:50.409989 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-02 04:10:50.410003 | orchestrator | Monday 02 February 2026 04:10:49 +0000 (0:00:04.176) 0:01:12.552 ******* 2026-02-02 04:10:50.410074 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:10:50.410098 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:12:21.897263 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:12:21.897380 | orchestrator | 2026-02-02 04:12:21.897397 | orchestrator | TASK [designate : Creating Designate databases] 
******************************** 2026-02-02 04:12:21.897426 | orchestrator | Monday 02 February 2026 04:10:50 +0000 (0:00:00.524) 0:01:13.077 ******* 2026-02-02 04:12:21.897438 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-02-02 04:12:21.897450 | orchestrator | 2026-02-02 04:12:21.897461 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-02-02 04:12:21.897472 | orchestrator | Monday 02 February 2026 04:10:52 +0000 (0:00:01.998) 0:01:15.075 ******* 2026-02-02 04:12:21.897484 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-02 04:12:21.897496 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-02-02 04:12:21.897507 | orchestrator | 2026-02-02 04:12:21.897518 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-02-02 04:12:21.897529 | orchestrator | Monday 02 February 2026 04:10:54 +0000 (0:00:02.184) 0:01:17.260 ******* 2026-02-02 04:12:21.897540 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:12:21.897550 | orchestrator | 2026-02-02 04:12:21.897561 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-02 04:12:21.897572 | orchestrator | Monday 02 February 2026 04:11:09 +0000 (0:00:15.145) 0:01:32.405 ******* 2026-02-02 04:12:21.897584 | orchestrator | 2026-02-02 04:12:21.897595 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-02 04:12:21.897605 | orchestrator | Monday 02 February 2026 04:11:09 +0000 (0:00:00.069) 0:01:32.474 ******* 2026-02-02 04:12:21.897617 | orchestrator | 2026-02-02 04:12:21.897649 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-02 04:12:21.897660 | orchestrator | Monday 02 February 2026 04:11:09 +0000 (0:00:00.069) 0:01:32.544 ******* 2026-02-02 04:12:21.897671 | orchestrator | 2026-02-02 
04:12:21.897682 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-02-02 04:12:21.897693 | orchestrator | Monday 02 February 2026 04:11:09 +0000 (0:00:00.072) 0:01:32.616 ******* 2026-02-02 04:12:21.897705 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:12:21.897716 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:12:21.897727 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:12:21.897738 | orchestrator | 2026-02-02 04:12:21.897749 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-02-02 04:12:21.897760 | orchestrator | Monday 02 February 2026 04:11:22 +0000 (0:00:12.175) 0:01:44.792 ******* 2026-02-02 04:12:21.897773 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:12:21.897785 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:12:21.897797 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:12:21.897809 | orchestrator | 2026-02-02 04:12:21.897822 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-02-02 04:12:21.897835 | orchestrator | Monday 02 February 2026 04:11:32 +0000 (0:00:10.736) 0:01:55.529 ******* 2026-02-02 04:12:21.897847 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:12:21.897860 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:12:21.897872 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:12:21.897884 | orchestrator | 2026-02-02 04:12:21.897897 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-02-02 04:12:21.897910 | orchestrator | Monday 02 February 2026 04:11:43 +0000 (0:00:10.397) 0:02:05.926 ******* 2026-02-02 04:12:21.897922 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:12:21.897936 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:12:21.897948 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:12:21.897961 | orchestrator | 2026-02-02 04:12:21.897995 
| orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-02-02 04:12:21.898009 | orchestrator | Monday 02 February 2026 04:11:53 +0000 (0:00:10.429) 0:02:16.356 ******* 2026-02-02 04:12:21.898080 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:12:21.898093 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:12:21.898105 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:12:21.898118 | orchestrator | 2026-02-02 04:12:21.898129 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-02-02 04:12:21.898140 | orchestrator | Monday 02 February 2026 04:12:03 +0000 (0:00:10.297) 0:02:26.654 ******* 2026-02-02 04:12:21.898151 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:12:21.898191 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:12:21.898202 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:12:21.898213 | orchestrator | 2026-02-02 04:12:21.898224 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-02-02 04:12:21.898235 | orchestrator | Monday 02 February 2026 04:12:14 +0000 (0:00:10.834) 0:02:37.489 ******* 2026-02-02 04:12:21.898246 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:12:21.898257 | orchestrator | 2026-02-02 04:12:21.898268 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 04:12:21.898280 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-02 04:12:21.898293 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-02 04:12:21.898304 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-02 04:12:21.898315 | orchestrator | 2026-02-02 04:12:21.898326 | orchestrator | 2026-02-02 04:12:21.898337 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-02 04:12:21.898357 | orchestrator | Monday 02 February 2026 04:12:21 +0000 (0:00:06.708) 0:02:44.197 ******* 2026-02-02 04:12:21.898368 | orchestrator | =============================================================================== 2026-02-02 04:12:21.898379 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.15s 2026-02-02 04:12:21.898390 | orchestrator | designate : Copying over designate.conf -------------------------------- 13.84s 2026-02-02 04:12:21.898418 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 12.18s 2026-02-02 04:12:21.898430 | orchestrator | designate : Restart designate-worker container ------------------------- 10.83s 2026-02-02 04:12:21.898447 | orchestrator | designate : Restart designate-api container ---------------------------- 10.74s 2026-02-02 04:12:21.898460 | orchestrator | designate : Restart designate-producer container ----------------------- 10.43s 2026-02-02 04:12:21.898471 | orchestrator | designate : Restart designate-central container ------------------------ 10.40s 2026-02-02 04:12:21.898482 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.30s 2026-02-02 04:12:21.898493 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.71s 2026-02-02 04:12:21.898503 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.09s 2026-02-02 04:12:21.898514 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.79s 2026-02-02 04:12:21.898525 | orchestrator | designate : Copying over config.json files for services ----------------- 5.68s 2026-02-02 04:12:21.898536 | orchestrator | designate : Check designate containers ---------------------------------- 4.18s 2026-02-02 04:12:21.898547 | orchestrator | service-ks-register : designate | 
Creating users ------------------------ 3.79s 2026-02-02 04:12:21.898558 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.60s 2026-02-02 04:12:21.898568 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.55s 2026-02-02 04:12:21.898579 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.37s 2026-02-02 04:12:21.898590 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.14s 2026-02-02 04:12:21.898601 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.06s 2026-02-02 04:12:21.898611 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 2.82s 2026-02-02 04:12:24.225554 | orchestrator | 2026-02-02 04:12:24 | INFO  | Task f43dc98d-bd09-4fd0-b388-80f27772ead1 (octavia) was prepared for execution. 2026-02-02 04:12:24.225631 | orchestrator | 2026-02-02 04:12:24 | INFO  | It takes a moment until task f43dc98d-bd09-4fd0-b388-80f27772ead1 (octavia) has been started and output is visible here. 
2026-02-02 04:14:22.603291 | orchestrator | 2026-02-02 04:14:22.603413 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 04:14:22.603430 | orchestrator | 2026-02-02 04:14:22.603443 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 04:14:22.603454 | orchestrator | Monday 02 February 2026 04:12:28 +0000 (0:00:00.257) 0:00:00.257 ******* 2026-02-02 04:14:22.603466 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:14:22.603478 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:14:22.603489 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:14:22.603500 | orchestrator | 2026-02-02 04:14:22.603511 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 04:14:22.603522 | orchestrator | Monday 02 February 2026 04:12:28 +0000 (0:00:00.314) 0:00:00.572 ******* 2026-02-02 04:14:22.603534 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-02-02 04:14:22.603545 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-02-02 04:14:22.603556 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-02-02 04:14:22.603567 | orchestrator | 2026-02-02 04:14:22.603579 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-02-02 04:14:22.603590 | orchestrator | 2026-02-02 04:14:22.603601 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-02 04:14:22.603634 | orchestrator | Monday 02 February 2026 04:12:29 +0000 (0:00:00.433) 0:00:01.005 ******* 2026-02-02 04:14:22.603647 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 04:14:22.603658 | orchestrator | 2026-02-02 04:14:22.603669 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 
2026-02-02 04:14:22.603680 | orchestrator | Monday 02 February 2026 04:12:29 +0000 (0:00:00.571) 0:00:01.577 ******* 2026-02-02 04:14:22.603691 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-02-02 04:14:22.603702 | orchestrator | 2026-02-02 04:14:22.603713 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-02-02 04:14:22.603724 | orchestrator | Monday 02 February 2026 04:12:33 +0000 (0:00:03.250) 0:00:04.827 ******* 2026-02-02 04:14:22.603735 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-02-02 04:14:22.603746 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-02-02 04:14:22.603757 | orchestrator | 2026-02-02 04:14:22.603767 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-02-02 04:14:22.603778 | orchestrator | Monday 02 February 2026 04:12:38 +0000 (0:00:05.962) 0:00:10.790 ******* 2026-02-02 04:14:22.603789 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-02 04:14:22.603802 | orchestrator | 2026-02-02 04:14:22.603814 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-02-02 04:14:22.603827 | orchestrator | Monday 02 February 2026 04:12:41 +0000 (0:00:02.971) 0:00:13.762 ******* 2026-02-02 04:14:22.603840 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-02 04:14:22.603853 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-02-02 04:14:22.603867 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-02-02 04:14:22.603881 | orchestrator | 2026-02-02 04:14:22.603893 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-02-02 04:14:22.603906 | orchestrator | Monday 02 February 2026 04:12:49 +0000 
(0:00:07.898) 0:00:21.660 ******* 2026-02-02 04:14:22.603919 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-02 04:14:22.603932 | orchestrator | 2026-02-02 04:14:22.603945 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-02-02 04:14:22.603972 | orchestrator | Monday 02 February 2026 04:12:52 +0000 (0:00:02.966) 0:00:24.626 ******* 2026-02-02 04:14:22.603984 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-02-02 04:14:22.604011 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-02-02 04:14:22.604024 | orchestrator | 2026-02-02 04:14:22.604037 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-02-02 04:14:22.604050 | orchestrator | Monday 02 February 2026 04:12:59 +0000 (0:00:06.959) 0:00:31.586 ******* 2026-02-02 04:14:22.604062 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-02-02 04:14:22.604075 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-02-02 04:14:22.604087 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-02-02 04:14:22.604100 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-02-02 04:14:22.604113 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-02-02 04:14:22.604125 | orchestrator | 2026-02-02 04:14:22.604138 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-02 04:14:22.604151 | orchestrator | Monday 02 February 2026 04:13:14 +0000 (0:00:14.717) 0:00:46.304 ******* 2026-02-02 04:14:22.604182 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 04:14:22.604194 | orchestrator | 2026-02-02 04:14:22.604205 | orchestrator | TASK [octavia : Create amphora flavor] 
***************************************** 2026-02-02 04:14:22.604224 | orchestrator | Monday 02 February 2026 04:13:15 +0000 (0:00:00.758) 0:00:47.063 ******* 2026-02-02 04:14:22.604235 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:14:22.604246 | orchestrator | 2026-02-02 04:14:22.604257 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-02-02 04:14:22.604268 | orchestrator | Monday 02 February 2026 04:13:19 +0000 (0:00:04.692) 0:00:51.755 ******* 2026-02-02 04:14:22.604279 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:14:22.604290 | orchestrator | 2026-02-02 04:14:22.604301 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-02-02 04:14:22.604329 | orchestrator | Monday 02 February 2026 04:13:24 +0000 (0:00:04.396) 0:00:56.151 ******* 2026-02-02 04:14:22.604341 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:14:22.604352 | orchestrator | 2026-02-02 04:14:22.604363 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-02-02 04:14:22.604374 | orchestrator | Monday 02 February 2026 04:13:27 +0000 (0:00:03.045) 0:00:59.197 ******* 2026-02-02 04:14:22.604385 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-02 04:14:22.604396 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-02 04:14:22.604407 | orchestrator | 2026-02-02 04:14:22.604418 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-02-02 04:14:22.604429 | orchestrator | Monday 02 February 2026 04:13:36 +0000 (0:00:08.997) 0:01:08.195 ******* 2026-02-02 04:14:22.604440 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-02-02 04:14:22.604451 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 
'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-02-02 04:14:22.604463 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-02-02 04:14:22.604475 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-02-02 04:14:22.604486 | orchestrator | 2026-02-02 04:14:22.604497 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-02-02 04:14:22.604508 | orchestrator | Monday 02 February 2026 04:13:51 +0000 (0:00:15.288) 0:01:23.483 ******* 2026-02-02 04:14:22.604523 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:14:22.604535 | orchestrator | 2026-02-02 04:14:22.604546 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-02-02 04:14:22.604557 | orchestrator | Monday 02 February 2026 04:13:56 +0000 (0:00:04.345) 0:01:27.829 ******* 2026-02-02 04:14:22.604568 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:14:22.604579 | orchestrator | 2026-02-02 04:14:22.604589 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-02-02 04:14:22.604600 | orchestrator | Monday 02 February 2026 04:14:01 +0000 (0:00:04.979) 0:01:32.808 ******* 2026-02-02 04:14:22.604611 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:14:22.604622 | orchestrator | 2026-02-02 04:14:22.604633 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-02-02 04:14:22.604644 | orchestrator | Monday 02 February 2026 04:14:01 +0000 (0:00:00.206) 0:01:33.014 ******* 2026-02-02 04:14:22.604655 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:14:22.604666 | orchestrator | 2026-02-02 04:14:22.604678 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2026-02-02 04:14:22.604689 | orchestrator | Monday 02 February 2026 04:14:05 +0000 (0:00:04.062) 0:01:37.076 ******* 2026-02-02 04:14:22.604700 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 04:14:22.604711 | orchestrator | 2026-02-02 04:14:22.604722 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-02-02 04:14:22.604733 | orchestrator | Monday 02 February 2026 04:14:06 +0000 (0:00:01.066) 0:01:38.143 ******* 2026-02-02 04:14:22.604750 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:14:22.604761 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:14:22.604772 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:14:22.604783 | orchestrator | 2026-02-02 04:14:22.604794 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-02-02 04:14:22.604811 | orchestrator | Monday 02 February 2026 04:14:11 +0000 (0:00:05.178) 0:01:43.322 ******* 2026-02-02 04:14:22.604822 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:14:22.604834 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:14:22.604844 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:14:22.604855 | orchestrator | 2026-02-02 04:14:22.604866 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-02-02 04:14:22.604877 | orchestrator | Monday 02 February 2026 04:14:15 +0000 (0:00:03.871) 0:01:47.194 ******* 2026-02-02 04:14:22.604888 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:14:22.604899 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:14:22.604910 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:14:22.604921 | orchestrator | 2026-02-02 04:14:22.604932 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-02-02 
04:14:22.604943 | orchestrator | Monday 02 February 2026 04:14:16 +0000 (0:00:00.967) 0:01:48.161 ******* 2026-02-02 04:14:22.604954 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:14:22.604965 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:14:22.604976 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:14:22.604987 | orchestrator | 2026-02-02 04:14:22.604998 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-02-02 04:14:22.605009 | orchestrator | Monday 02 February 2026 04:14:18 +0000 (0:00:01.688) 0:01:49.849 ******* 2026-02-02 04:14:22.605020 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:14:22.605031 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:14:22.605042 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:14:22.605053 | orchestrator | 2026-02-02 04:14:22.605064 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-02-02 04:14:22.605075 | orchestrator | Monday 02 February 2026 04:14:19 +0000 (0:00:01.287) 0:01:51.137 ******* 2026-02-02 04:14:22.605086 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:14:22.605097 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:14:22.605108 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:14:22.605119 | orchestrator | 2026-02-02 04:14:22.605129 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-02-02 04:14:22.605140 | orchestrator | Monday 02 February 2026 04:14:20 +0000 (0:00:01.126) 0:01:52.263 ******* 2026-02-02 04:14:22.605152 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:14:22.605184 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:14:22.605195 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:14:22.605206 | orchestrator | 2026-02-02 04:14:22.605224 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-02-02 04:14:48.273649 | orchestrator 
| Monday 02 February 2026 04:14:22 +0000 (0:00:02.131) 0:01:54.395 ******* 2026-02-02 04:14:48.273787 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:14:48.273814 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:14:48.273830 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:14:48.273845 | orchestrator | 2026-02-02 04:14:48.273860 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-02-02 04:14:48.273875 | orchestrator | Monday 02 February 2026 04:14:24 +0000 (0:00:01.455) 0:01:55.850 ******* 2026-02-02 04:14:48.273890 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:14:48.273907 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:14:48.273920 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:14:48.273935 | orchestrator | 2026-02-02 04:14:48.273950 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-02-02 04:14:48.273966 | orchestrator | Monday 02 February 2026 04:14:25 +0000 (0:00:01.611) 0:01:57.461 ******* 2026-02-02 04:14:48.273982 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:14:48.274097 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:14:48.274118 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:14:48.274132 | orchestrator | 2026-02-02 04:14:48.274224 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-02 04:14:48.274241 | orchestrator | Monday 02 February 2026 04:14:28 +0000 (0:00:02.726) 0:02:00.187 ******* 2026-02-02 04:14:48.274256 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 04:14:48.274270 | orchestrator | 2026-02-02 04:14:48.274284 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-02-02 04:14:48.274299 | orchestrator | Monday 02 February 2026 04:14:29 +0000 (0:00:00.764) 0:02:00.952 ******* 2026-02-02 
04:14:48.274314 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:14:48.274328 | orchestrator | 2026-02-02 04:14:48.274343 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-02-02 04:14:48.274358 | orchestrator | Monday 02 February 2026 04:14:32 +0000 (0:00:03.223) 0:02:04.176 ******* 2026-02-02 04:14:48.274374 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:14:48.274388 | orchestrator | 2026-02-02 04:14:48.274402 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-02-02 04:14:48.274419 | orchestrator | Monday 02 February 2026 04:14:35 +0000 (0:00:02.937) 0:02:07.113 ******* 2026-02-02 04:14:48.274434 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-02 04:14:48.274449 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-02 04:14:48.274462 | orchestrator | 2026-02-02 04:14:48.274476 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-02-02 04:14:48.274491 | orchestrator | Monday 02 February 2026 04:14:41 +0000 (0:00:06.300) 0:02:13.414 ******* 2026-02-02 04:14:48.274506 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:14:48.274521 | orchestrator | 2026-02-02 04:14:48.274535 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-02-02 04:14:48.274548 | orchestrator | Monday 02 February 2026 04:14:45 +0000 (0:00:04.248) 0:02:17.662 ******* 2026-02-02 04:14:48.274564 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:14:48.274579 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:14:48.274593 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:14:48.274608 | orchestrator | 2026-02-02 04:14:48.274622 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-02-02 04:14:48.274637 | orchestrator | Monday 02 February 2026 04:14:46 +0000 (0:00:00.310) 0:02:17.973 ******* 
2026-02-02 04:14:48.274676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-02 04:14:48.274719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-02 04:14:48.274751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-02 04:14:48.274767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-02 04:14:48.274785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-02 04:14:48.274806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-02 04:14:48.274823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-02 04:14:48.274841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-02 04:14:48.274876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-02 04:14:49.717472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-02 04:14:49.717578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-02 04:14:49.717594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-02 04:14:49.717623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-02 04:14:49.717636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-02 04:14:49.717668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-02 04:14:49.717681 | orchestrator |
2026-02-02 04:14:49.717695 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-02-02 04:14:49.717708 | orchestrator | Monday 02 February 2026 04:14:48 +0000 (0:00:02.548) 0:02:20.522 *******
2026-02-02 04:14:49.717720 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:14:49.717732 | orchestrator |
2026-02-02 04:14:49.717744 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-02-02 04:14:49.717755 | orchestrator | Monday 02 February 2026 04:14:48 +0000 (0:00:00.152) 0:02:20.675 *******
2026-02-02 04:14:49.717766 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:14:49.717795 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:14:49.717807 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:14:49.717818 | orchestrator |
2026-02-02 04:14:49.717829 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2026-02-02 04:14:49.717840 | orchestrator | Monday 02 February 2026 04:14:49 +0000 (0:00:00.296) 0:02:20.971 *******
2026-02-02 04:14:49.717852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-02 04:14:49.717866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-02 04:14:49.717884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-02 04:14:49.717896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-02 04:14:49.717915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-02 04:14:49.717943 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:14:49.717964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-02 04:14:54.319812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-02 04:14:54.319920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-02 04:14:54.319958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-02 04:14:54.319974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-02 04:14:54.320008 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:14:54.320023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-02 04:14:54.320035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-02 04:14:54.320065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-02 04:14:54.320076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-02 04:14:54.320092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-02 04:14:54.320112 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:14:54.320123 | orchestrator |
2026-02-02 04:14:54.320204 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-02 04:14:54.320220 | orchestrator | Monday 02 February 2026 04:14:49 +0000 (0:00:00.633) 0:02:21.605 *******
2026-02-02 04:14:54.320231 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 04:14:54.320242 | orchestrator |
2026-02-02 04:14:54.320252 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ********
2026-02-02 04:14:54.320263 | orchestrator | Monday 02 February 2026 04:14:50 +0000 (0:00:00.719) 0:02:22.324 *******
2026-02-02 04:14:54.320271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-02 04:14:54.320279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-02 04:14:54.320341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-02 04:14:55.820482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-02 04:14:55.820628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-02 04:14:55.820645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-02 04:14:55.820657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-02 04:14:55.820670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-02 04:14:55.820682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-02 04:14:55.820711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-02 04:14:55.820723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-02 04:14:55.820747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-02 04:14:55.820759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-02 04:14:55.820771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-02 04:14:55.820783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-02 04:14:55.820795 | orchestrator |
2026-02-02 04:14:55.820809 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] ***
2026-02-02 04:14:55.820822 | orchestrator | Monday 02 February 2026 04:14:55 +0000 (0:00:04.768) 0:02:27.092 *******
2026-02-02 04:14:55.820843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-02 04:14:55.924890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-02 04:14:55.925024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-02 04:14:55.925042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-02 04:14:55.925055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-02 04:14:55.925068 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:14:55.925082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-02 04:14:55.925095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-02 04:14:55.925143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-02 04:14:55.925212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-02 04:14:55.925226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-02 04:14:55.925238 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:14:55.925249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode':
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-02 04:14:55.925261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-02 04:14:55.925272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-02 04:14:55.925300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2026-02-02 04:14:56.464988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-02 04:14:56.465122 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:14:56.465144 | orchestrator | 2026-02-02 04:14:56.465209 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-02-02 04:14:56.465226 | orchestrator | Monday 02 February 2026 04:14:55 +0000 (0:00:00.632) 0:02:27.724 ******* 2026-02-02 04:14:56.465240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2026-02-02 04:14:56.465254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-02 04:14:56.465267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-02 04:14:56.465280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-02 04:14:56.465333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-02 04:14:56.465346 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:14:56.465366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-02 04:14:56.465379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-02 04:14:56.465391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-02 04:14:56.465402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-02 04:14:56.465420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-02 04:14:56.465432 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:14:56.465450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-02 04:15:00.988046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-02 04:15:00.988127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-02 04:15:00.988138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-02 04:15:00.988146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-02 04:15:00.988210 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:15:00.988220 | orchestrator | 2026-02-02 04:15:00.988227 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-02-02 
04:15:00.988234 | orchestrator | Monday 02 February 2026 04:14:56 +0000 (0:00:01.063) 0:02:28.787 ******* 2026-02-02 04:15:00.988241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-02 04:15:00.988263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-02 04:15:00.988270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-02 04:15:00.988276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-02 04:15:00.988282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-02 04:15:00.988294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-02 04:15:00.988300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-02 04:15:00.988312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-02 04:15:16.340072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-02 04:15:16.340196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-02 04:15:16.340211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-02 04:15:16.340238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-02 04:15:16.340245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:15:16.340252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2026-02-02 04:15:16.340287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:15:16.340295 | orchestrator | 2026-02-02 04:15:16.340303 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-02-02 04:15:16.340312 | orchestrator | Monday 02 February 2026 04:15:01 +0000 (0:00:04.901) 0:02:33.689 ******* 2026-02-02 04:15:16.340319 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-02 04:15:16.340327 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-02 04:15:16.340333 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-02 04:15:16.340339 | orchestrator | 2026-02-02 04:15:16.340345 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-02-02 04:15:16.340351 | orchestrator | Monday 02 February 2026 04:15:03 +0000 (0:00:01.560) 0:02:35.249 ******* 2026-02-02 04:15:16.340358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-02 04:15:16.340374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-02 04:15:16.340381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-02 04:15:16.340399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-02 04:15:31.285987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-02 04:15:31.286193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-02 04:15:31.286216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-02 04:15:31.286256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-02 04:15:31.286268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-02 04:15:31.286281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-02 04:15:31.286326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-02 04:15:31.286339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-02 04:15:31.286351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:15:31.286372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:15:31.286384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:15:31.286396 | orchestrator | 2026-02-02 04:15:31.286409 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-02-02 04:15:31.286422 | orchestrator | Monday 02 February 2026 04:15:19 +0000 (0:00:16.101) 0:02:51.350 ******* 2026-02-02 04:15:31.286433 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:15:31.286446 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:15:31.286457 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:15:31.286468 | orchestrator | 2026-02-02 04:15:31.286479 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-02-02 04:15:31.286490 | orchestrator | Monday 02 February 2026 04:15:21 +0000 (0:00:01.916) 0:02:53.267 ******* 2026-02-02 04:15:31.286502 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-02 04:15:31.286513 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-02 04:15:31.286524 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-02 04:15:31.286534 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-02 04:15:31.286545 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-02 04:15:31.286556 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-02 04:15:31.286567 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-02 04:15:31.286578 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-02 04:15:31.286589 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-02 04:15:31.286600 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-02 04:15:31.286611 | orchestrator 
| changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-02 04:15:31.286621 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-02 04:15:31.286632 | orchestrator | 2026-02-02 04:15:31.286644 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-02-02 04:15:31.286660 | orchestrator | Monday 02 February 2026 04:15:26 +0000 (0:00:04.843) 0:02:58.110 ******* 2026-02-02 04:15:31.286671 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-02 04:15:31.286682 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-02 04:15:31.286700 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-02 04:15:39.347050 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-02 04:15:39.347204 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-02 04:15:39.347223 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-02 04:15:39.347236 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-02 04:15:39.347247 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-02 04:15:39.347258 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-02 04:15:39.347270 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-02 04:15:39.347281 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-02 04:15:39.347293 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-02 04:15:39.347304 | orchestrator | 2026-02-02 04:15:39.347316 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-02-02 04:15:39.347329 | orchestrator | Monday 02 February 2026 04:15:31 +0000 (0:00:04.968) 0:03:03.079 ******* 2026-02-02 04:15:39.347340 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-02-02 04:15:39.347351 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-02 04:15:39.347362 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-02 04:15:39.347373 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-02 04:15:39.347384 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-02 04:15:39.347395 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-02 04:15:39.347406 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-02 04:15:39.347417 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-02 04:15:39.347428 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-02 04:15:39.347439 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-02 04:15:39.347450 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-02 04:15:39.347461 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-02 04:15:39.347472 | orchestrator | 2026-02-02 04:15:39.347483 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-02-02 04:15:39.347494 | orchestrator | Monday 02 February 2026 04:15:36 +0000 (0:00:05.143) 0:03:08.223 ******* 2026-02-02 04:15:39.347509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-02 04:15:39.347524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-02 04:15:39.347605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-02 04:15:39.347622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-02 04:15:39.347636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-02 04:15:39.347649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-02-02 04:15:39.347663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-02 04:15:39.347677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-02 04:15:39.347705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-02 04:15:39.347727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-02 04:16:58.545677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-02 04:16:58.545779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-02 04:16:58.545795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:16:58.545809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:16:58.545847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-02 04:16:58.545860 | orchestrator | 2026-02-02 
04:16:58.545874 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-02 04:16:58.545883 | orchestrator | Monday 02 February 2026 04:15:39 +0000 (0:00:03.529) 0:03:11.752 ******* 2026-02-02 04:16:58.545890 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:16:58.545898 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:16:58.545904 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:16:58.545911 | orchestrator | 2026-02-02 04:16:58.545931 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-02-02 04:16:58.545938 | orchestrator | Monday 02 February 2026 04:15:40 +0000 (0:00:00.507) 0:03:12.260 ******* 2026-02-02 04:16:58.545945 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:16:58.545951 | orchestrator | 2026-02-02 04:16:58.545958 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-02-02 04:16:58.545965 | orchestrator | Monday 02 February 2026 04:15:42 +0000 (0:00:01.989) 0:03:14.249 ******* 2026-02-02 04:16:58.545972 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:16:58.545979 | orchestrator | 2026-02-02 04:16:58.545985 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-02-02 04:16:58.545993 | orchestrator | Monday 02 February 2026 04:15:44 +0000 (0:00:02.098) 0:03:16.348 ******* 2026-02-02 04:16:58.546003 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:16:58.546078 | orchestrator | 2026-02-02 04:16:58.546091 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-02-02 04:16:58.546104 | orchestrator | Monday 02 February 2026 04:15:46 +0000 (0:00:02.103) 0:03:18.451 ******* 2026-02-02 04:16:58.546133 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:16:58.546145 | orchestrator | 2026-02-02 04:16:58.546155 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-02-02 04:16:58.546184 | orchestrator | Monday 02 February 2026 04:15:48 +0000 (0:00:02.128) 0:03:20.579 ******* 2026-02-02 04:16:58.546196 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:16:58.546208 | orchestrator | 2026-02-02 04:16:58.546219 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-02 04:16:58.546230 | orchestrator | Monday 02 February 2026 04:16:08 +0000 (0:00:20.076) 0:03:40.656 ******* 2026-02-02 04:16:58.546240 | orchestrator | 2026-02-02 04:16:58.546252 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-02 04:16:58.546264 | orchestrator | Monday 02 February 2026 04:16:08 +0000 (0:00:00.080) 0:03:40.736 ******* 2026-02-02 04:16:58.546275 | orchestrator | 2026-02-02 04:16:58.546287 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-02 04:16:58.546300 | orchestrator | Monday 02 February 2026 04:16:08 +0000 (0:00:00.071) 0:03:40.808 ******* 2026-02-02 04:16:58.546313 | orchestrator | 2026-02-02 04:16:58.546325 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-02-02 04:16:58.546338 | orchestrator | Monday 02 February 2026 04:16:09 +0000 (0:00:00.076) 0:03:40.884 ******* 2026-02-02 04:16:58.546350 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:16:58.546362 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:16:58.546375 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:16:58.546388 | orchestrator | 2026-02-02 04:16:58.546397 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-02-02 04:16:58.546405 | orchestrator | Monday 02 February 2026 04:16:24 +0000 (0:00:15.918) 0:03:56.802 ******* 2026-02-02 04:16:58.546423 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:16:58.546431 | orchestrator | changed: 
[testbed-node-2] 2026-02-02 04:16:58.546439 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:16:58.546446 | orchestrator | 2026-02-02 04:16:58.546454 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-02-02 04:16:58.546461 | orchestrator | Monday 02 February 2026 04:16:36 +0000 (0:00:11.526) 0:04:08.329 ******* 2026-02-02 04:16:58.546469 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:16:58.546476 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:16:58.546484 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:16:58.546492 | orchestrator | 2026-02-02 04:16:58.546500 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-02-02 04:16:58.546508 | orchestrator | Monday 02 February 2026 04:16:41 +0000 (0:00:05.350) 0:04:13.679 ******* 2026-02-02 04:16:58.546516 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:16:58.546528 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:16:58.546539 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:16:58.546550 | orchestrator | 2026-02-02 04:16:58.546561 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-02-02 04:16:58.546572 | orchestrator | Monday 02 February 2026 04:16:50 +0000 (0:00:08.194) 0:04:21.874 ******* 2026-02-02 04:16:58.546583 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:16:58.546595 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:16:58.546606 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:16:58.546617 | orchestrator | 2026-02-02 04:16:58.546628 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 04:16:58.546640 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-02 04:16:58.546654 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-02-02 04:16:58.546666 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-02 04:16:58.546677 | orchestrator | 2026-02-02 04:16:58.546688 | orchestrator | 2026-02-02 04:16:58.546700 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 04:16:58.546711 | orchestrator | Monday 02 February 2026 04:16:58 +0000 (0:00:08.450) 0:04:30.325 ******* 2026-02-02 04:16:58.546722 | orchestrator | =============================================================================== 2026-02-02 04:16:58.546733 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.08s 2026-02-02 04:16:58.546744 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.10s 2026-02-02 04:16:58.546753 | orchestrator | octavia : Restart octavia-api container -------------------------------- 15.92s 2026-02-02 04:16:58.546760 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.29s 2026-02-02 04:16:58.546767 | orchestrator | octavia : Adding octavia related roles --------------------------------- 14.72s 2026-02-02 04:16:58.546780 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.53s 2026-02-02 04:16:58.546786 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.00s 2026-02-02 04:16:58.546793 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 8.45s 2026-02-02 04:16:58.546800 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 8.19s 2026-02-02 04:16:58.546806 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.90s 2026-02-02 04:16:58.546813 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 6.96s 2026-02-02 04:16:58.546820 
| orchestrator | octavia : Get security groups for octavia ------------------------------- 6.30s 2026-02-02 04:16:58.546826 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 5.96s 2026-02-02 04:16:58.546839 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.35s 2026-02-02 04:16:58.546858 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.18s 2026-02-02 04:16:58.890628 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.14s 2026-02-02 04:16:58.890714 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 4.98s 2026-02-02 04:16:58.890724 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 4.97s 2026-02-02 04:16:58.890732 | orchestrator | octavia : Copying over config.json files for services ------------------- 4.90s 2026-02-02 04:16:58.890740 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 4.84s 2026-02-02 04:17:01.211594 | orchestrator | 2026-02-02 04:17:01 | INFO  | Task 803f7fb5-95cb-46b7-b922-7cf9cb0e33df (ceilometer) was prepared for execution. 2026-02-02 04:17:01.211695 | orchestrator | 2026-02-02 04:17:01 | INFO  | It takes a moment until task 803f7fb5-95cb-46b7-b922-7cf9cb0e33df (ceilometer) has been started and output is visible here. 
2026-02-02 04:17:22.966489 | orchestrator |
2026-02-02 04:17:22.966607 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-02 04:17:22.966624 | orchestrator |
2026-02-02 04:17:22.966637 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-02 04:17:22.966648 | orchestrator | Monday 02 February 2026 04:17:05 +0000 (0:00:00.276) 0:00:00.276 *******
2026-02-02 04:17:22.966660 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:17:22.966672 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:17:22.966684 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:17:22.966695 | orchestrator | ok: [testbed-node-3]
2026-02-02 04:17:22.966705 | orchestrator | ok: [testbed-node-4]
2026-02-02 04:17:22.966716 | orchestrator | ok: [testbed-node-5]
2026-02-02 04:17:22.966727 | orchestrator |
2026-02-02 04:17:22.966739 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-02 04:17:22.966750 | orchestrator | Monday 02 February 2026 04:17:06 +0000 (0:00:00.702) 0:00:00.979 *******
2026-02-02 04:17:22.966761 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True)
2026-02-02 04:17:22.966773 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True)
2026-02-02 04:17:22.966784 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True)
2026-02-02 04:17:22.966795 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True)
2026-02-02 04:17:22.966806 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True)
2026-02-02 04:17:22.966817 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True)
2026-02-02 04:17:22.966828 | orchestrator |
2026-02-02 04:17:22.966839 | orchestrator | PLAY [Apply role ceilometer] ***************************************************
2026-02-02 04:17:22.966850 | orchestrator |
2026-02-02 04:17:22.966861 | orchestrator | TASK [ceilometer : include_tasks] **********************************************
2026-02-02 04:17:22.966872 | orchestrator | Monday 02 February 2026 04:17:06 +0000 (0:00:00.570) 0:00:01.550 *******
2026-02-02 04:17:22.966884 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 04:17:22.966897 | orchestrator |
2026-02-02 04:17:22.966908 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ********************
2026-02-02 04:17:22.966919 | orchestrator | Monday 02 February 2026 04:17:07 +0000 (0:00:01.217) 0:00:02.767 *******
2026-02-02 04:17:22.966930 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:17:22.966941 | orchestrator |
2026-02-02 04:17:22.966952 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] *******************
2026-02-02 04:17:22.966963 | orchestrator | Monday 02 February 2026 04:17:07 +0000 (0:00:00.126) 0:00:02.893 *******
2026-02-02 04:17:22.966974 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:17:22.966985 | orchestrator |
2026-02-02 04:17:22.966996 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ********************
2026-02-02 04:17:22.967033 | orchestrator | Monday 02 February 2026 04:17:08 +0000 (0:00:00.128) 0:00:03.022 *******
2026-02-02 04:17:22.967047 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-02 04:17:22.967060 | orchestrator |
2026-02-02 04:17:22.967073 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] ***********************
2026-02-02 04:17:22.967086 | orchestrator | Monday 02 February 2026 04:17:11 +0000 (0:00:03.327) 0:00:06.349 *******
2026-02-02 04:17:22.967099 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-02 04:17:22.967112 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service)
2026-02-02 04:17:22.967123 | orchestrator |
2026-02-02 04:17:22.967134 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] *********************** 2026-02-02 04:17:22.967145 | orchestrator | Monday 02 February 2026 04:17:14 +0000 (0:00:03.439) 0:00:09.789 ******* 2026-02-02 04:17:22.967155 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-02 04:17:22.967207 | orchestrator | 2026-02-02 04:17:22.967220 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ****************** 2026-02-02 04:17:22.967245 | orchestrator | Monday 02 February 2026 04:17:17 +0000 (0:00:02.814) 0:00:12.603 ******* 2026-02-02 04:17:22.967338 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin) 2026-02-02 04:17:22.967350 | orchestrator | 2026-02-02 04:17:22.967361 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] ******* 2026-02-02 04:17:22.967372 | orchestrator | Monday 02 February 2026 04:17:21 +0000 (0:00:03.831) 0:00:16.434 ******* 2026-02-02 04:17:22.967383 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:17:22.967394 | orchestrator | 2026-02-02 04:17:22.967405 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-02-02 04:17:22.967416 | orchestrator | Monday 02 February 2026 04:17:21 +0000 (0:00:00.108) 0:00:16.543 ******* 2026-02-02 04:17:22.967430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-02 04:17:22.967464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-02 04:17:22.967478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-02 04:17:22.967490 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-02 04:17:22.967515 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-02 04:17:22.967528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-02 04:17:22.967541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-02 04:17:22.967560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-02 04:17:27.634264 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-02 04:17:27.634372 | orchestrator | 2026-02-02 04:17:27.634390 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-02-02 04:17:27.634428 | orchestrator | Monday 02 February 2026 04:17:22 +0000 (0:00:01.340) 0:00:17.883 ******* 2026-02-02 04:17:27.634440 | orchestrator | ok: 
[testbed-node-0 -> localhost] 2026-02-02 04:17:27.634452 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-02 04:17:27.634463 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-02 04:17:27.634474 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-02 04:17:27.634485 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-02 04:17:27.634496 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-02 04:17:27.634507 | orchestrator | 2026-02-02 04:17:27.634518 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-02-02 04:17:27.634530 | orchestrator | Monday 02 February 2026 04:17:24 +0000 (0:00:01.637) 0:00:19.521 ******* 2026-02-02 04:17:27.634541 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:17:27.634552 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:17:27.634563 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:17:27.634574 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:17:27.634584 | orchestrator | ok: [testbed-node-4] 2026-02-02 04:17:27.634595 | orchestrator | ok: [testbed-node-5] 2026-02-02 04:17:27.634606 | orchestrator | 2026-02-02 04:17:27.634617 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-02-02 04:17:27.634628 | orchestrator | Monday 02 February 2026 04:17:25 +0000 (0:00:00.612) 0:00:20.134 ******* 2026-02-02 04:17:27.634639 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:17:27.634650 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:17:27.634660 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:17:27.634672 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:17:27.634683 | orchestrator | skipping: [testbed-node-4] 2026-02-02 04:17:27.634694 | orchestrator | skipping: [testbed-node-5] 2026-02-02 04:17:27.634704 | orchestrator | 2026-02-02 04:17:27.634716 | orchestrator | TASK [ceilometer : Set the variable that control the copy of 
custom meter definitions] *** 2026-02-02 04:17:27.634728 | orchestrator | Monday 02 February 2026 04:17:25 +0000 (0:00:00.778) 0:00:20.912 ******* 2026-02-02 04:17:27.634739 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:17:27.634749 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:17:27.634760 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:17:27.634771 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:17:27.634782 | orchestrator | ok: [testbed-node-4] 2026-02-02 04:17:27.634831 | orchestrator | ok: [testbed-node-5] 2026-02-02 04:17:27.634843 | orchestrator | 2026-02-02 04:17:27.634855 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-02-02 04:17:27.634866 | orchestrator | Monday 02 February 2026 04:17:26 +0000 (0:00:00.582) 0:00:21.494 ******* 2026-02-02 04:17:27.634883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-02 04:17:27.634897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-02 04:17:27.634916 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:17:27.634948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-02 04:17:27.634961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-02 04:17:27.634972 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:17:27.634984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-02 04:17:27.634996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-02 04:17:27.635014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-02 04:17:27.635026 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:17:27.635038 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:17:27.635049 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-02 04:17:27.635068 | orchestrator | skipping: [testbed-node-4] 2026-02-02 04:17:27.635088 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-02 04:17:32.146884 | orchestrator | skipping: [testbed-node-5] 2026-02-02 04:17:32.146997 | orchestrator | 2026-02-02 04:17:32.147015 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-02-02 04:17:32.147029 | orchestrator | Monday 02 February 2026 04:17:27 +0000 (0:00:01.060) 0:00:22.555 ******* 2026-02-02 04:17:32.147044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': 
{'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-02 04:17:32.147060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-02 04:17:32.147073 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:17:32.147101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-02 04:17:32.147115 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-02 04:17:32.147145 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:17:32.147157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-02 04:17:32.147231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': 
'30'}}})  2026-02-02 04:17:32.147246 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:17:32.147277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-02 04:17:32.147290 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:17:32.147301 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-02 04:17:32.147313 | orchestrator | skipping: [testbed-node-4] 2026-02-02 04:17:32.147330 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-02 04:17:32.147342 | orchestrator | skipping: [testbed-node-5] 2026-02-02 04:17:32.147353 | orchestrator | 2026-02-02 04:17:32.147365 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-02-02 04:17:32.147388 | orchestrator | Monday 02 February 2026 04:17:28 +0000 (0:00:00.816) 0:00:23.372 ******* 2026-02-02 04:17:32.147399 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-02 04:17:32.147410 | orchestrator | 2026-02-02 04:17:32.147424 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-02-02 04:17:32.147439 | orchestrator | Monday 02 February 2026 04:17:29 +0000 (0:00:00.735) 0:00:24.108 ******* 2026-02-02 04:17:32.147451 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:17:32.147464 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:17:32.147477 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:17:32.147489 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:17:32.147502 | orchestrator | ok: [testbed-node-4] 2026-02-02 04:17:32.147512 | orchestrator | ok: [testbed-node-5] 2026-02-02 04:17:32.147523 | orchestrator | 2026-02-02 04:17:32.147534 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-02-02 04:17:32.147545 | orchestrator | Monday 02 February 2026 04:17:29 +0000 (0:00:00.771) 
0:00:24.879 ******* 2026-02-02 04:17:32.147556 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:17:32.147567 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:17:32.147577 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:17:32.147588 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:17:32.147598 | orchestrator | ok: [testbed-node-4] 2026-02-02 04:17:32.147609 | orchestrator | ok: [testbed-node-5] 2026-02-02 04:17:32.147620 | orchestrator | 2026-02-02 04:17:32.147630 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-02-02 04:17:32.147641 | orchestrator | Monday 02 February 2026 04:17:30 +0000 (0:00:00.877) 0:00:25.757 ******* 2026-02-02 04:17:32.147652 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:17:32.147663 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:17:32.147674 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:17:32.147684 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:17:32.147695 | orchestrator | skipping: [testbed-node-4] 2026-02-02 04:17:32.147706 | orchestrator | skipping: [testbed-node-5] 2026-02-02 04:17:32.147716 | orchestrator | 2026-02-02 04:17:32.147727 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-02-02 04:17:32.147738 | orchestrator | Monday 02 February 2026 04:17:31 +0000 (0:00:00.755) 0:00:26.512 ******* 2026-02-02 04:17:32.147749 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:17:32.147760 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:17:32.147771 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:17:32.147781 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:17:32.147792 | orchestrator | skipping: [testbed-node-4] 2026-02-02 04:17:32.147803 | orchestrator | skipping: [testbed-node-5] 2026-02-02 04:17:32.147814 | orchestrator | 2026-02-02 04:17:36.965827 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] 
************************ 2026-02-02 04:17:36.965931 | orchestrator | Monday 02 February 2026 04:17:32 +0000 (0:00:00.558) 0:00:27.070 ******* 2026-02-02 04:17:36.965947 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-02 04:17:36.965960 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-02 04:17:36.965972 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-02 04:17:36.965983 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-02 04:17:36.965994 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-02 04:17:36.966005 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-02 04:17:36.966143 | orchestrator | 2026-02-02 04:17:36.966205 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-02-02 04:17:36.966223 | orchestrator | Monday 02 February 2026 04:17:33 +0000 (0:00:01.426) 0:00:28.497 ******* 2026-02-02 04:17:36.966239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-02 04:17:36.966279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-02 04:17:36.966292 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:17:36.966318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-02 04:17:36.966333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-02 04:17:36.966346 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:17:36.966359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-02 04:17:36.966394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-02 04:17:36.966407 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:17:36.966420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-02 04:17:36.966441 | orchestrator | skipping: [testbed-node-3] 
2026-02-02 04:17:36.966455 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-02 04:17:36.966467 | orchestrator | skipping: [testbed-node-4] 2026-02-02 04:17:36.966485 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-02 04:17:36.966501 | orchestrator | skipping: [testbed-node-5] 2026-02-02 04:17:36.966518 | orchestrator | 2026-02-02 04:17:36.966538 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] ************************* 2026-02-02 04:17:36.966556 | orchestrator | Monday 02 February 2026 04:17:34 +0000 (0:00:00.785) 0:00:29.282 ******* 2026-02-02 04:17:36.966580 | orchestrator | 
skipping: [testbed-node-0] 2026-02-02 04:17:36.966606 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:17:36.966626 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:17:36.966643 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:17:36.966656 | orchestrator | skipping: [testbed-node-4] 2026-02-02 04:17:36.966669 | orchestrator | skipping: [testbed-node-5] 2026-02-02 04:17:36.966680 | orchestrator | 2026-02-02 04:17:36.966691 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] ***************** 2026-02-02 04:17:36.966702 | orchestrator | Monday 02 February 2026 04:17:35 +0000 (0:00:00.808) 0:00:30.091 ******* 2026-02-02 04:17:36.966712 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-02 04:17:36.966723 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-02 04:17:36.966734 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-02 04:17:36.966745 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-02 04:17:36.966755 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-02 04:17:36.966766 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-02 04:17:36.966777 | orchestrator | 2026-02-02 04:17:36.966788 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************ 2026-02-02 04:17:36.966799 | orchestrator | Monday 02 February 2026 04:17:36 +0000 (0:00:01.283) 0:00:31.375 ******* 2026-02-02 04:17:36.966822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-02 04:17:42.516440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-02 04:17:42.516582 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:17:42.516612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-02 04:17:42.516664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-02 04:17:42.516686 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:17:42.516705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-02 04:17:42.516723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-02 04:17:42.516743 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:17:42.516762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-02 04:17:42.516815 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:17:42.516858 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-02 04:17:42.516870 | orchestrator | skipping: [testbed-node-4] 2026-02-02 04:17:42.516882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-02 04:17:42.516893 | orchestrator | skipping: [testbed-node-5] 2026-02-02 04:17:42.516905 | orchestrator | 2026-02-02 04:17:42.516917 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] *************** 2026-02-02 04:17:42.516929 | orchestrator | Monday 02 February 2026 04:17:37 +0000 (0:00:01.017) 0:00:32.392 ******* 2026-02-02 04:17:42.516940 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:17:42.516951 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:17:42.516962 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:17:42.516974 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:17:42.516987 | orchestrator | skipping: [testbed-node-4] 2026-02-02 04:17:42.517005 | orchestrator | skipping: [testbed-node-5] 2026-02-02 04:17:42.517018 | orchestrator | 2026-02-02 04:17:42.517031 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] ********************* 2026-02-02 04:17:42.517044 | orchestrator | Monday 02 February 2026 04:17:38 +0000 (0:00:00.752) 0:00:33.145 ******* 2026-02-02 04:17:42.517062 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:17:42.517080 | orchestrator | 2026-02-02 04:17:42.517099 | orchestrator | TASK [ceilometer : Set ceilometer policy file] ********************************* 2026-02-02 04:17:42.517117 | orchestrator | Monday 02 February 2026 04:17:38 +0000 (0:00:00.139) 0:00:33.284 ******* 2026-02-02 04:17:42.517135 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:17:42.517155 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:17:42.517219 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:17:42.517240 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:17:42.517259 | orchestrator | skipping: [testbed-node-4] 2026-02-02 04:17:42.517275 | orchestrator | skipping: [testbed-node-5] 2026-02-02 04:17:42.517288 | 
orchestrator | 2026-02-02 04:17:42.517302 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-02-02 04:17:42.517314 | orchestrator | Monday 02 February 2026 04:17:38 +0000 (0:00:00.584) 0:00:33.869 ******* 2026-02-02 04:17:42.517340 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 04:17:42.517354 | orchestrator | 2026-02-02 04:17:42.517365 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] ***** 2026-02-02 04:17:42.517376 | orchestrator | Monday 02 February 2026 04:17:40 +0000 (0:00:01.351) 0:00:35.220 ******* 2026-02-02 04:17:42.517388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-02 04:17:42.517411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-02 04:17:42.996267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-02 04:17:42.996373 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-02 04:17:42.996406 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-02 04:17:42.996420 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-02 04:17:42.996453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-02 04:17:42.996466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-02 04:17:42.996495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-02 04:17:42.996508 | orchestrator | 2026-02-02 04:17:42.996522 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] *** 2026-02-02 04:17:42.996534 | orchestrator | Monday 02 February 2026 04:17:42 +0000 (0:00:02.211) 0:00:37.432 ******* 2026-02-02 04:17:42.996547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-02 04:17:42.996565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-02 04:17:42.996584 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:17:42.996597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-02 04:17:42.996609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-02 04:17:42.996621 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:17:42.996632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-02 04:17:42.996652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-02 04:17:44.804760 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:17:44.804865 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-02 04:17:44.804891 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:17:44.804927 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-02 04:17:44.804962 | orchestrator | skipping: [testbed-node-4] 2026-02-02 04:17:44.804972 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': 
'30'}}})  2026-02-02 04:17:44.804981 | orchestrator | skipping: [testbed-node-5] 2026-02-02 04:17:44.804991 | orchestrator | 2026-02-02 04:17:44.805001 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] *** 2026-02-02 04:17:44.805011 | orchestrator | Monday 02 February 2026 04:17:43 +0000 (0:00:00.805) 0:00:38.237 ******* 2026-02-02 04:17:44.805021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-02 04:17:44.805032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-02 04:17:44.805058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-02 04:17:44.805068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-02 04:17:44.805088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-02 04:17:44.805098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-02 04:17:44.805107 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:17:44.805116 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:17:44.805125 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:17:44.805134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-02 04:17:44.805144 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:17:44.805153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-02 04:17:44.805162 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:17:44.805259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-02 04:17:52.091756 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:17:52.091881 | orchestrator |
2026-02-02 04:17:52.091922 | orchestrator | TASK [ceilometer : Copying over config.json files for services] ****************
2026-02-02 04:17:52.091936 | orchestrator | Monday 02 February 2026 04:17:44 +0000 (0:00:01.483) 0:00:39.720 *******
2026-02-02 04:17:52.091965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-02 04:17:52.091979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-02 04:17:52.091991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-02 04:17:52.092004 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-02 04:17:52.092017 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-02 04:17:52.092047 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-02 04:17:52.092073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-02 04:17:52.092086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-02 04:17:52.092098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-02 04:17:52.092109 | orchestrator |
2026-02-02 04:17:52.092121 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] *******************************
2026-02-02 04:17:52.092132 | orchestrator | Monday 02 February 2026 04:17:47 +0000 (0:00:02.469) 0:00:42.190 *******
2026-02-02 04:17:52.092144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-02 04:17:52.092155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-02 04:17:52.092198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-02 04:18:01.139257 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-02 04:18:01.139368 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-02 04:18:01.139383 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-02 04:18:01.139395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-02 04:18:01.139406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-02 04:18:01.139417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-02 04:18:01.139450 | orchestrator |
2026-02-02 04:18:01.139486 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] *****************
2026-02-02 04:18:01.139515 | orchestrator | Monday 02 February 2026 04:17:52 +0000 (0:00:04.814) 0:00:47.005 *******
2026-02-02 04:18:01.139526 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-02 04:18:01.139537 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-02 04:18:01.139547 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-02 04:18:01.139557 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-02 04:18:01.139567 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-02 04:18:01.139576 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-02 04:18:01.139586 | orchestrator |
2026-02-02 04:18:01.139596 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************
2026-02-02 04:18:01.139606 | orchestrator | Monday 02 February 2026 04:17:53 +0000 (0:00:01.448) 0:00:48.454 *******
2026-02-02 04:18:01.139616 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:18:01.139625 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:18:01.139635 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:18:01.139644 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:18:01.139654 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:18:01.139669 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:18:01.139679 | orchestrator |
2026-02-02 04:18:01.139689 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] ***
2026-02-02 04:18:01.139700 | orchestrator | Monday 02 February 2026 04:17:54 +0000 (0:00:00.600) 0:00:49.054 *******
2026-02-02 04:18:01.139709 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:18:01.139719 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:18:01.139729 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:18:01.139738 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:18:01.139748 | orchestrator | changed: [testbed-node-1]
2026-02-02 04:18:01.139760 | orchestrator | changed: [testbed-node-2]
2026-02-02 04:18:01.139771 | orchestrator |
2026-02-02 04:18:01.139783 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] ***************************
2026-02-02 04:18:01.139794 | orchestrator | Monday 02 February 2026 04:17:55 +0000 (0:00:01.563) 0:00:50.618 *******
2026-02-02 04:18:01.139805 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:18:01.139817 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:18:01.139828 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:18:01.139840 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:18:01.139852 | orchestrator | changed: [testbed-node-1]
2026-02-02 04:18:01.139863 | orchestrator | changed: [testbed-node-2]
2026-02-02 04:18:01.139875 | orchestrator |
2026-02-02 04:18:01.139886 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] **************************
2026-02-02 04:18:01.139898 | orchestrator | Monday 02 February 2026 04:17:57 +0000 (0:00:01.405) 0:00:52.023 *******
2026-02-02 04:18:01.139910 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-02 04:18:01.139926 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-02 04:18:01.139943 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-02 04:18:01.139959 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-02 04:18:01.139975 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-02 04:18:01.139992 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-02 04:18:01.140007 | orchestrator |
2026-02-02 04:18:01.140022 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] *********************
2026-02-02 04:18:01.140038 | orchestrator | Monday 02 February 2026 04:17:58 +0000 (0:00:01.554) 0:00:53.578 *******
2026-02-02 04:18:01.140067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-02 04:18:01.140087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-02 04:18:01.140104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-02 04:18:01.140140 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-02 04:18:01.918834 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-02 04:18:01.918938 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-02 04:18:01.918980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-02 04:18:01.918995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-02 04:18:01.919008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-02 04:18:01.919020 | orchestrator |
2026-02-02 04:18:01.919033 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] ****************************
2026-02-02 04:18:01.919046 | orchestrator | Monday 02 February 2026 04:18:01 +0000 (0:00:02.476) 0:00:56.054 *******
2026-02-02 04:18:01.919058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-02 04:18:01.919103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-02 04:18:01.919118 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:18:01.919130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-02 04:18:01.919149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-02 04:18:01.919161 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:18:01.919219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-02 04:18:01.919232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-02 04:18:01.919243 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:18:01.919254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-02 04:18:01.919265 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:18:01.919290 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-02 04:18:05.249419 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:18:05.249491 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-02 04:18:05.249499 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:18:05.249504 | orchestrator |
2026-02-02 04:18:05.249509 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] *****************************
2026-02-02 04:18:05.249514 | orchestrator | Monday 02 February 2026 04:18:01 +0000 (0:00:00.789) 0:00:56.844 *******
2026-02-02 04:18:05.249518 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:18:05.249522 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:18:05.249525 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:18:05.249529 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:18:05.249533 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:18:05.249537 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:18:05.249540 | orchestrator |
2026-02-02 04:18:05.249544 | orchestrator | TASK [ceilometer : Copying over existing policy file] **************************
2026-02-02 04:18:05.249548 | orchestrator | Monday 02 February 2026 04:18:02 +0000 (0:00:00.780) 0:00:57.624 *******
2026-02-02 04:18:05.249553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-02 04:18:05.249560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-02 04:18:05.249565 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:18:05.249569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-02 04:18:05.249586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-02 04:18:05.249603 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:18:05.249618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-02 04:18:05.249622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-02 04:18:05.249626 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:18:05.249630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-02 04:18:05.249634 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:18:05.249638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes':
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-02 04:18:05.249642 | orchestrator | skipping: [testbed-node-4] 2026-02-02 04:18:05.249646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-02 04:18:05.249653 | orchestrator | skipping: [testbed-node-5] 2026-02-02 04:18:05.249657 | orchestrator | 2026-02-02 04:18:05.249664 | orchestrator | TASK [ceilometer : Check ceilometer containers] ******************************** 2026-02-02 04:18:05.249668 | orchestrator | Monday 02 February 2026 04:18:03 +0000 (0:00:00.841) 0:00:58.466 ******* 2026-02-02 04:18:05.249676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-02 04:18:34.094750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-02 04:18:34.094870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-02 04:18:34.094887 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-02 04:18:34.094901 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-02 04:18:34.094929 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-02 04:18:34.094966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-02 04:18:34.094997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-02 04:18:34.095010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-02 04:18:34.095022 | orchestrator | 
2026-02-02 04:18:34.095035 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-02-02 04:18:34.095048 | orchestrator | Monday 02 February 2026 04:18:05 +0000 (0:00:01.687) 0:01:00.154 ******* 2026-02-02 04:18:34.095060 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:18:34.095072 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:18:34.095083 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:18:34.095094 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:18:34.095105 | orchestrator | skipping: [testbed-node-4] 2026-02-02 04:18:34.095115 | orchestrator | skipping: [testbed-node-5] 2026-02-02 04:18:34.095126 | orchestrator | 2026-02-02 04:18:34.095137 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-02-02 04:18:34.095149 | orchestrator | Monday 02 February 2026 04:18:05 +0000 (0:00:00.589) 0:01:00.743 ******* 2026-02-02 04:18:34.095160 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:18:34.095203 | orchestrator | 2026-02-02 04:18:34.095215 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-02 04:18:34.095226 | orchestrator | Monday 02 February 2026 04:18:10 +0000 (0:00:04.640) 0:01:05.384 ******* 2026-02-02 04:18:34.095237 | orchestrator | 2026-02-02 04:18:34.095248 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-02 04:18:34.095260 | orchestrator | Monday 02 February 2026 04:18:10 +0000 (0:00:00.077) 0:01:05.462 ******* 2026-02-02 04:18:34.095270 | orchestrator | 2026-02-02 04:18:34.095283 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-02 04:18:34.095303 | orchestrator | Monday 02 February 2026 04:18:10 +0000 (0:00:00.080) 0:01:05.542 ******* 2026-02-02 04:18:34.095316 | orchestrator | 2026-02-02 04:18:34.095329 | orchestrator | TASK [ceilometer : Flush 
handlers] ********************************************* 2026-02-02 04:18:34.095343 | orchestrator | Monday 02 February 2026 04:18:10 +0000 (0:00:00.250) 0:01:05.793 ******* 2026-02-02 04:18:34.095356 | orchestrator | 2026-02-02 04:18:34.095369 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-02 04:18:34.095382 | orchestrator | Monday 02 February 2026 04:18:10 +0000 (0:00:00.073) 0:01:05.867 ******* 2026-02-02 04:18:34.095394 | orchestrator | 2026-02-02 04:18:34.095407 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-02 04:18:34.095420 | orchestrator | Monday 02 February 2026 04:18:11 +0000 (0:00:00.067) 0:01:05.934 ******* 2026-02-02 04:18:34.095433 | orchestrator | 2026-02-02 04:18:34.095446 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] ******* 2026-02-02 04:18:34.095460 | orchestrator | Monday 02 February 2026 04:18:11 +0000 (0:00:00.072) 0:01:06.007 ******* 2026-02-02 04:18:34.095473 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:18:34.095485 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:18:34.095498 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:18:34.095510 | orchestrator | 2026-02-02 04:18:34.095523 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************ 2026-02-02 04:18:34.095536 | orchestrator | Monday 02 February 2026 04:18:18 +0000 (0:00:07.437) 0:01:13.444 ******* 2026-02-02 04:18:34.095549 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:18:34.095562 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:18:34.095580 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:18:34.095593 | orchestrator | 2026-02-02 04:18:34.095606 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************ 2026-02-02 04:18:34.095619 | orchestrator | Monday 02 February 2026 04:18:27 +0000 
(0:00:09.373) 0:01:22.818 ******* 2026-02-02 04:18:34.095632 | orchestrator | changed: [testbed-node-3] 2026-02-02 04:18:34.095644 | orchestrator | changed: [testbed-node-4] 2026-02-02 04:18:34.095655 | orchestrator | changed: [testbed-node-5] 2026-02-02 04:18:34.095666 | orchestrator | 2026-02-02 04:18:34.095677 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 04:18:34.095688 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-02 04:18:34.095701 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-02 04:18:34.095720 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-02 04:18:34.564713 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-02 04:18:34.564815 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-02 04:18:34.564831 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-02 04:18:34.564845 | orchestrator | 2026-02-02 04:18:34.564858 | orchestrator | 2026-02-02 04:18:34.564869 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 04:18:34.564882 | orchestrator | Monday 02 February 2026 04:18:34 +0000 (0:00:06.192) 0:01:29.011 ******* 2026-02-02 04:18:34.564893 | orchestrator | =============================================================================== 2026-02-02 04:18:34.564905 | orchestrator | ceilometer : Restart ceilometer-central container ----------------------- 9.37s 2026-02-02 04:18:34.564940 | orchestrator | ceilometer : Restart ceilometer-notification container ------------------ 7.44s 2026-02-02 04:18:34.564952 | orchestrator | ceilometer : Restart 
ceilometer-compute container ----------------------- 6.19s 2026-02-02 04:18:34.564963 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 4.81s 2026-02-02 04:18:34.564974 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 4.64s 2026-02-02 04:18:34.564985 | orchestrator | service-ks-register : ceilometer | Granting user roles ------------------ 3.83s 2026-02-02 04:18:34.564996 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 3.44s 2026-02-02 04:18:34.565007 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.33s 2026-02-02 04:18:34.565018 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 2.81s 2026-02-02 04:18:34.565029 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.48s 2026-02-02 04:18:34.565040 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.47s 2026-02-02 04:18:34.565052 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.21s 2026-02-02 04:18:34.565063 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 1.69s 2026-02-02 04:18:34.565074 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.64s 2026-02-02 04:18:34.565085 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.56s 2026-02-02 04:18:34.565097 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.55s 2026-02-02 04:18:34.565108 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.48s 2026-02-02 04:18:34.565119 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 1.45s 2026-02-02 04:18:34.565130 | orchestrator | ceilometer : Check if custom 
polling.yaml exists ------------------------ 1.43s 2026-02-02 04:18:34.565141 | orchestrator | ceilometer : Copying over event_pipeline.yaml --------------------------- 1.41s 2026-02-02 04:18:36.964655 | orchestrator | 2026-02-02 04:18:36 | INFO  | Task ab7c1c67-33bf-4da9-b047-dc770c371e83 (aodh) was prepared for execution. 2026-02-02 04:18:36.964738 | orchestrator | 2026-02-02 04:18:36 | INFO  | It takes a moment until task ab7c1c67-33bf-4da9-b047-dc770c371e83 (aodh) has been started and output is visible here. 2026-02-02 04:19:07.198646 | orchestrator | 2026-02-02 04:19:07.198764 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 04:19:07.198782 | orchestrator | 2026-02-02 04:19:07.198794 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 04:19:07.198806 | orchestrator | Monday 02 February 2026 04:18:41 +0000 (0:00:00.264) 0:00:00.264 ******* 2026-02-02 04:19:07.198818 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:19:07.198829 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:19:07.198840 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:19:07.198851 | orchestrator | 2026-02-02 04:19:07.198862 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 04:19:07.198873 | orchestrator | Monday 02 February 2026 04:18:41 +0000 (0:00:00.295) 0:00:00.559 ******* 2026-02-02 04:19:07.198884 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-02-02 04:19:07.198911 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-02-02 04:19:07.198923 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-02-02 04:19:07.198934 | orchestrator | 2026-02-02 04:19:07.198945 | orchestrator | PLAY [Apply role aodh] ********************************************************* 2026-02-02 04:19:07.198956 | orchestrator | 2026-02-02 04:19:07.198966 | orchestrator | TASK [aodh : 
include_tasks] **************************************************** 2026-02-02 04:19:07.198977 | orchestrator | Monday 02 February 2026 04:18:41 +0000 (0:00:00.432) 0:00:00.991 ******* 2026-02-02 04:19:07.198989 | orchestrator | included: /ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 04:19:07.199001 | orchestrator | 2026-02-02 04:19:07.199012 | orchestrator | TASK [service-ks-register : aodh | Creating services] ************************** 2026-02-02 04:19:07.199044 | orchestrator | Monday 02 February 2026 04:18:42 +0000 (0:00:00.569) 0:00:01.561 ******* 2026-02-02 04:19:07.199056 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming)) 2026-02-02 04:19:07.199067 | orchestrator | 2026-02-02 04:19:07.199079 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] ************************* 2026-02-02 04:19:07.199090 | orchestrator | Monday 02 February 2026 04:18:45 +0000 (0:00:03.171) 0:00:04.732 ******* 2026-02-02 04:19:07.199101 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal) 2026-02-02 04:19:07.199112 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public) 2026-02-02 04:19:07.199123 | orchestrator | 2026-02-02 04:19:07.199133 | orchestrator | TASK [service-ks-register : aodh | Creating projects] ************************** 2026-02-02 04:19:07.199144 | orchestrator | Monday 02 February 2026 04:18:51 +0000 (0:00:06.116) 0:00:10.848 ******* 2026-02-02 04:19:07.199155 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-02 04:19:07.199166 | orchestrator | 2026-02-02 04:19:07.199264 | orchestrator | TASK [service-ks-register : aodh | Creating users] ***************************** 2026-02-02 04:19:07.199279 | orchestrator | Monday 02 February 2026 04:18:54 +0000 (0:00:03.236) 0:00:14.085 ******* 2026-02-02 04:19:07.199293 | orchestrator | [WARNING]: Module did not set 
no_log for update_password 2026-02-02 04:19:07.199306 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service) 2026-02-02 04:19:07.199319 | orchestrator | 2026-02-02 04:19:07.199332 | orchestrator | TASK [service-ks-register : aodh | Creating roles] ***************************** 2026-02-02 04:19:07.199345 | orchestrator | Monday 02 February 2026 04:18:58 +0000 (0:00:03.839) 0:00:17.924 ******* 2026-02-02 04:19:07.199358 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-02 04:19:07.199370 | orchestrator | 2026-02-02 04:19:07.199383 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************ 2026-02-02 04:19:07.199396 | orchestrator | Monday 02 February 2026 04:19:01 +0000 (0:00:02.988) 0:00:20.913 ******* 2026-02-02 04:19:07.199409 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin) 2026-02-02 04:19:07.199421 | orchestrator | 2026-02-02 04:19:07.199434 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-02-02 04:19:07.199447 | orchestrator | Monday 02 February 2026 04:19:05 +0000 (0:00:03.445) 0:00:24.358 ******* 2026-02-02 04:19:07.199508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-02 04:19:07.199550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-02 04:19:07.199582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-02 04:19:07.199595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-02 04:19:07.199609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-02 04:19:07.199632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-02 04:19:07.199644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:19:07.199664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:19:08.466116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:19:08.466292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-02 04:19:08.466313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-02 04:19:08.466327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-02 04:19:08.466340 | orchestrator | 2026-02-02 04:19:08.466354 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-02-02 04:19:08.466367 | orchestrator | Monday 02 February 2026 04:19:07 +0000 (0:00:01.908) 0:00:26.267 ******* 2026-02-02 04:19:08.466379 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:19:08.466391 | orchestrator | 2026-02-02 
04:19:08.466402 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-02-02 04:19:08.466413 | orchestrator | Monday 02 February 2026 04:19:07 +0000 (0:00:00.140) 0:00:26.408 ******* 2026-02-02 04:19:08.466424 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:19:08.466435 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:19:08.466446 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:19:08.466457 | orchestrator | 2026-02-02 04:19:08.466468 | orchestrator | TASK [aodh : Copying over existing policy file] ******************************** 2026-02-02 04:19:08.466479 | orchestrator | Monday 02 February 2026 04:19:07 +0000 (0:00:00.515) 0:00:26.924 ******* 2026-02-02 04:19:08.466492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-02 04:19:08.466550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-02 04:19:08.466572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-02 04:19:08.466587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-02 04:19:08.466601 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:19:08.466615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-02 04:19:08.466629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-02 04:19:08.466643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-02 04:19:08.466675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-02 04:19:13.169086 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:19:13.169295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-02 04:19:13.169330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-02-02 04:19:13.169349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-02 04:19:13.169367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-02 04:19:13.169384 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:19:13.169401 | orchestrator | 2026-02-02 04:19:13.169418 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-02-02 04:19:13.169434 | orchestrator | Monday 02 February 2026 04:19:08 +0000 (0:00:00.611) 0:00:27.536 ******* 2026-02-02 04:19:13.169472 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 04:19:13.169489 | orchestrator | 2026-02-02 04:19:13.169501 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-02-02 04:19:13.169513 | orchestrator | Monday 
02 February 2026 04:19:09 +0000 (0:00:00.729) 0:00:28.265 ******* 2026-02-02 04:19:13.169526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-02 04:19:13.169572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-02 04:19:13.169590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 
'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-02 04:19:13.169604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-02 04:19:13.169617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}}) 2026-02-02 04:19:13.169641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-02 04:19:13.169655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:19:13.169684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:19:13.775420 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:19:13.775522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-02 04:19:13.775539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-02 04:19:13.775551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-02 04:19:13.775586 | orchestrator | 2026-02-02 04:19:13.775601 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-02-02 04:19:13.775613 | orchestrator | Monday 02 February 2026 04:19:13 +0000 (0:00:03.978) 0:00:32.244 ******* 2026-02-02 04:19:13.775627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-02 04:19:13.775654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-02 04:19:13.775685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-02 04:19:13.775726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-02 04:19:13.775739 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:19:13.775752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-02 04:19:13.775771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-02 04:19:13.775783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-02 04:19:13.775794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-02 04:19:13.775805 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:19:13.775832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-02 04:19:14.756802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-02-02 04:19:14.756893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-02 04:19:14.756927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-02 04:19:14.756938 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:19:14.756950 | orchestrator | 2026-02-02 04:19:14.756960 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-02-02 04:19:14.756970 | orchestrator | Monday 02 February 2026 04:19:13 +0000 (0:00:00.607) 0:00:32.852 ******* 2026-02-02 04:19:14.756980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-02 04:19:14.757003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-02 04:19:14.757013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-02 04:19:14.757037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-02 04:19:14.757047 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:19:14.757063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-02 04:19:14.757072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-02 04:19:14.757082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-02 04:19:14.757091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-02 04:19:14.757100 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:19:14.757119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': 
{'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-02 04:19:18.650785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-02 04:19:18.650916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-02 04:19:18.650933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-02 04:19:18.650946 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:19:18.650959 | orchestrator | 2026-02-02 04:19:18.650977 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 2026-02-02 04:19:18.650998 | orchestrator | Monday 02 February 2026 04:19:14 +0000 (0:00:00.975) 0:00:33.827 ******* 2026-02-02 04:19:18.651018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-02 04:19:18.651070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-02 04:19:18.651116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-02 04:19:18.651155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-02 04:19:18.651212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-02 04:19:18.651236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-02 04:19:18.651256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:19:18.651275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:19:18.651289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:19:18.651321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-02 04:19:26.870228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-02 04:19:26.870333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-02 04:19:26.870348 | orchestrator | 2026-02-02 04:19:26.870360 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-02-02 04:19:26.870371 | orchestrator | Monday 02 February 2026 04:19:18 +0000 (0:00:03.895) 0:00:37.723 ******* 2026-02-02 04:19:26.870381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-02 04:19:26.870406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-02 04:19:26.870416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-02 04:19:26.870459 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-02 04:19:26.870470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-02 04:19:26.870479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-02 04:19:26.870488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 
'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:19:26.870502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:19:26.870511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:19:26.870527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-02 04:19:26.870544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-02 04:19:31.800086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-02 04:19:31.800214 | orchestrator | 2026-02-02 04:19:31.800232 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-02-02 04:19:31.800242 | orchestrator | Monday 02 February 2026 04:19:26 +0000 (0:00:08.211) 0:00:45.934 ******* 2026-02-02 04:19:31.800251 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:19:31.800259 | orchestrator | 
changed: [testbed-node-1] 2026-02-02 04:19:31.800268 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:19:31.800277 | orchestrator | 2026-02-02 04:19:31.800285 | orchestrator | TASK [aodh : Check aodh containers] ******************************************** 2026-02-02 04:19:31.800293 | orchestrator | Monday 02 February 2026 04:19:28 +0000 (0:00:01.735) 0:00:47.669 ******* 2026-02-02 04:19:31.800303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-02 04:19:31.800331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-02 04:19:31.800361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-02 04:19:31.800386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-02 04:19:31.800395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-02 04:19:31.800403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-02 04:19:31.800412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:19:31.800424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:19:31.800439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-02 04:19:31.800447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-02 04:19:31.800462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-02 04:20:24.025541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-02 04:20:24.025634 | orchestrator |
2026-02-02 04:20:24.025643 | orchestrator | TASK [aodh : include_tasks] ****************************************************
2026-02-02 04:20:24.025649 | orchestrator | Monday 02 February 2026 04:19:31 +0000 (0:00:03.202) 0:00:50.871 *******
2026-02-02 04:20:24.025654 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:20:24.025660 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:20:24.025664 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:20:24.025668 | orchestrator |
2026-02-02 04:20:24.025673 | orchestrator | TASK [aodh : Creating aodh database] *******************************************
2026-02-02 04:20:24.025678 | orchestrator | Monday 02 February 2026 04:19:32 +0000 (0:00:00.287) 0:00:51.158 *******
2026-02-02 04:20:24.025682 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:20:24.025699 | orchestrator |
2026-02-02 04:20:24.025703 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] **************
2026-02-02 04:20:24.025708 | orchestrator | Monday 02 February 2026 04:19:34 +0000 (0:00:02.024) 0:00:53.183 *******
2026-02-02 04:20:24.025712 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:20:24.025730 | orchestrator |
2026-02-02 04:20:24.025735 | orchestrator | TASK [aodh : Running aodh bootstrap container] *********************************
2026-02-02 04:20:24.025739 | orchestrator | Monday 02 February 2026 04:19:36 +0000 (0:00:02.066) 0:00:55.249 *******
2026-02-02 04:20:24.025750 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:20:24.025755 | orchestrator |
2026-02-02 04:20:24.025759 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-02-02 04:20:24.025764 | orchestrator | Monday 02 February 2026 04:19:47 +0000 (0:00:11.570) 0:01:06.820 *******
2026-02-02 04:20:24.025768 | orchestrator |
2026-02-02 04:20:24.025773 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-02-02 04:20:24.025777 | orchestrator | Monday 02 February 2026 04:19:47 +0000 (0:00:00.074) 0:01:06.895 *******
2026-02-02 04:20:24.025781 | orchestrator |
2026-02-02 04:20:24.025786 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-02-02 04:20:24.025790 | orchestrator | Monday 02 February 2026 04:19:47 +0000 (0:00:00.087) 0:01:06.982 *******
2026-02-02 04:20:24.025794 | orchestrator |
2026-02-02 04:20:24.025798 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] ****************************
2026-02-02 04:20:24.025803 | orchestrator | Monday 02 February 2026 04:19:48 +0000 (0:00:00.252) 0:01:07.234 *******
2026-02-02 04:20:24.025808 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:20:24.025832 | orchestrator | changed: [testbed-node-1]
2026-02-02 04:20:24.025843 | orchestrator | changed: [testbed-node-2]
2026-02-02 04:20:24.025848 | orchestrator |
2026-02-02 04:20:24.025852 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] **********************
2026-02-02 04:20:24.025856 | orchestrator | Monday 02 February 2026 04:19:53 +0000 (0:00:05.495) 0:01:12.730 *******
2026-02-02 04:20:24.025861 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:20:24.025865 | orchestrator | changed: [testbed-node-2]
2026-02-02 04:20:24.025870 | orchestrator | changed: [testbed-node-1]
2026-02-02 04:20:24.025874 | orchestrator |
2026-02-02 04:20:24.025878 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] ***********************
2026-02-02 04:20:24.025883 | orchestrator | Monday 02 February 2026 04:20:03 +0000 (0:00:09.859) 0:01:22.590 *******
2026-02-02 04:20:24.025887 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:20:24.025891 | orchestrator | changed: [testbed-node-2]
2026-02-02 04:20:24.025896 | orchestrator | changed: [testbed-node-1]
2026-02-02 04:20:24.025900 | orchestrator |
2026-02-02 04:20:24.025904 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] ***********************
2026-02-02 04:20:24.025909 | orchestrator | Monday 02 February 2026 04:20:13 +0000 (0:00:09.987) 0:01:32.577 *******
2026-02-02 04:20:24.025913 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:20:24.025917 | orchestrator | changed: [testbed-node-2]
2026-02-02 04:20:24.025921 | orchestrator | changed: [testbed-node-1]
2026-02-02 04:20:24.025926 | orchestrator |
2026-02-02 04:20:24.025930 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 04:20:24.025935 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-02 04:20:24.025941 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-02 04:20:24.025946 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-02 04:20:24.025950 | orchestrator |
2026-02-02 04:20:24.025954 | orchestrator |
2026-02-02 04:20:24.025959 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 04:20:24.025963 | orchestrator | Monday 02 February 2026 04:20:23 +0000 (0:00:10.174) 0:01:42.752 *******
2026-02-02 04:20:24.025967 | orchestrator | ===============================================================================
2026-02-02 04:20:24.025971 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 11.57s
2026-02-02 04:20:24.025976 | orchestrator | aodh : Restart aodh-notifier container --------------------------------- 10.17s
2026-02-02 04:20:24.025995 | orchestrator | aodh : Restart aodh-listener container ---------------------------------- 9.99s
2026-02-02 04:20:24.026000 | orchestrator | aodh : Restart aodh-evaluator container --------------------------------- 9.86s
2026-02-02 04:20:24.026004 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 8.21s
2026-02-02 04:20:24.026009 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 6.12s
2026-02-02 04:20:24.026013 | orchestrator | aodh : Restart aodh-api container --------------------------------------- 5.50s
2026-02-02 04:20:24.026054 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 3.98s
2026-02-02 04:20:24.026059 | orchestrator | aodh : Copying over config.json files for services ---------------------- 3.90s
2026-02-02 04:20:24.026063 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 3.84s
2026-02-02 04:20:24.026068 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 3.45s
2026-02-02 04:20:24.026072 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.24s
2026-02-02 04:20:24.026076 | orchestrator | aodh : Check aodh containers -------------------------------------------- 3.20s
2026-02-02 04:20:24.026081 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.17s
2026-02-02 04:20:24.026085 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 2.99s
2026-02-02 04:20:24.026089 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.07s
2026-02-02 04:20:24.026094 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.02s
2026-02-02 04:20:24.026098 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 1.91s
2026-02-02 04:20:24.026103 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 1.74s
2026-02-02 04:20:24.026109 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 0.98s
2026-02-02 04:20:26.321456 | orchestrator | 2026-02-02 04:20:26 | INFO  | Task da8b7095-f35b-4cab-b4bb-b8a8169c235e (kolla-ceph-rgw) was prepared for execution.
2026-02-02 04:20:26.321546 | orchestrator | 2026-02-02 04:20:26 | INFO  | It takes a moment until task da8b7095-f35b-4cab-b4bb-b8a8169c235e (kolla-ceph-rgw) has been started and output is visible here.
2026-02-02 04:21:01.124039 | orchestrator |
2026-02-02 04:21:01.124156 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-02 04:21:01.124172 | orchestrator |
2026-02-02 04:21:01.124214 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-02 04:21:01.124228 | orchestrator | Monday 02 February 2026 04:20:30 +0000 (0:00:00.275) 0:00:00.275 *******
2026-02-02 04:21:01.124239 | orchestrator | ok: [testbed-manager]
2026-02-02 04:21:01.124252 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:21:01.124264 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:21:01.124275 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:21:01.124286 | orchestrator | ok: [testbed-node-3]
2026-02-02 04:21:01.124297 | orchestrator | ok: [testbed-node-4]
2026-02-02 04:21:01.124308 | orchestrator | ok: [testbed-node-5]
2026-02-02 04:21:01.124320 | orchestrator |
2026-02-02 04:21:01.124348 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-02 04:21:01.124359 | orchestrator | Monday 02 February 2026 04:20:31 +0000 (0:00:00.834) 0:00:01.110 *******
2026-02-02 04:21:01.124371 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-02-02 04:21:01.124382 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-02-02 04:21:01.124394 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-02-02 04:21:01.124405 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-02-02 04:21:01.124416 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-02-02 04:21:01.124427 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-02-02 04:21:01.124438 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-02-02 04:21:01.124472 | orchestrator |
2026-02-02 04:21:01.124484 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-02-02 04:21:01.124495 | orchestrator |
2026-02-02 04:21:01.124506 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-02-02 04:21:01.124518 | orchestrator | Monday 02 February 2026 04:20:32 +0000 (0:00:00.690) 0:00:01.801 *******
2026-02-02 04:21:01.124529 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 04:21:01.124542 | orchestrator |
2026-02-02 04:21:01.124553 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-02-02 04:21:01.124564 | orchestrator | Monday 02 February 2026 04:20:33 +0000 (0:00:01.534) 0:00:03.335 *******
2026-02-02 04:21:01.124576 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-02-02 04:21:01.124587 | orchestrator |
2026-02-02 04:21:01.124598 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-02-02 04:21:01.124609 | orchestrator | Monday 02 February 2026 04:20:37 +0000 (0:00:03.697) 0:00:07.033 *******
2026-02-02 04:21:01.124621 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-02-02 04:21:01.124633 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-02-02 04:21:01.124644 | orchestrator |
2026-02-02 04:21:01.124655 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-02-02 04:21:01.124666 | orchestrator | Monday 02 February 2026 04:20:43 +0000 (0:00:05.913) 0:00:12.947 *******
2026-02-02 04:21:01.124677 | orchestrator | ok: [testbed-manager] => (item=service)
2026-02-02 04:21:01.124688 | orchestrator |
2026-02-02 04:21:01.124699 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-02-02 04:21:01.124710 | orchestrator | Monday 02 February 2026 04:20:46 +0000 (0:00:03.029) 0:00:15.976 *******
2026-02-02 04:21:01.124721 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-02 04:21:01.124732 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-02-02 04:21:01.124743 | orchestrator |
2026-02-02 04:21:01.124754 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-02-02 04:21:01.124764 | orchestrator | Monday 02 February 2026 04:20:50 +0000 (0:00:03.639) 0:00:19.615 *******
2026-02-02 04:21:01.124775 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-02-02 04:21:01.124786 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-02-02 04:21:01.124797 | orchestrator |
2026-02-02 04:21:01.124808 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-02-02 04:21:01.124819 | orchestrator | Monday 02 February 2026 04:20:55 +0000 (0:00:05.861) 0:00:25.477 *******
2026-02-02 04:21:01.124830 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-02-02 04:21:01.124841 | orchestrator |
2026-02-02 04:21:01.124852 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 04:21:01.124863 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 04:21:01.124874 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 04:21:01.124885 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 04:21:01.124896 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 04:21:01.124907 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 04:21:01.124944 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 04:21:01.124956 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 04:21:01.124967 | orchestrator |
2026-02-02 04:21:01.124978 | orchestrator |
2026-02-02 04:21:01.124989 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 04:21:01.125000 | orchestrator | Monday 02 February 2026 04:21:00 +0000 (0:00:04.655) 0:00:30.133 *******
2026-02-02 04:21:01.125011 | orchestrator | ===============================================================================
2026-02-02 04:21:01.125027 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.91s
2026-02-02 04:21:01.125039 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.86s
2026-02-02 04:21:01.125050 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.66s
2026-02-02 04:21:01.125061 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.70s
2026-02-02 04:21:01.125072 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.64s
2026-02-02 04:21:01.125082 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.03s
2026-02-02 04:21:01.125093 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.53s
2026-02-02 04:21:01.125104 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.83s
2026-02-02 04:21:01.125115 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.69s
2026-02-02 04:21:03.469457 | orchestrator | 2026-02-02 04:21:03 | INFO  | Task 27e3be9e-c87f-4269-8278-8e7d20c12cc9 (gnocchi) was prepared for execution.
2026-02-02 04:21:03.469575 | orchestrator | 2026-02-02 04:21:03 | INFO  | It takes a moment until task 27e3be9e-c87f-4269-8278-8e7d20c12cc9 (gnocchi) has been started and output is visible here.
2026-02-02 04:21:08.597067 | orchestrator |
2026-02-02 04:21:08.597232 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-02 04:21:08.597252 | orchestrator |
2026-02-02 04:21:08.597308 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-02 04:21:08.597323 | orchestrator | Monday 02 February 2026 04:21:07 +0000 (0:00:00.261) 0:00:00.261 *******
2026-02-02 04:21:08.597336 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:21:08.597348 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:21:08.597360 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:21:08.597371 | orchestrator |
2026-02-02 04:21:08.597383 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-02 04:21:08.597394 | orchestrator | Monday 02 February 2026 04:21:07 +0000 (0:00:00.306) 0:00:00.568 *******
2026-02-02 04:21:08.597406 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False)
2026-02-02 04:21:08.597417 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True
2026-02-02 04:21:08.597429 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False)
2026-02-02 04:21:08.597441 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False)
2026-02-02 04:21:08.597452 | orchestrator |
2026-02-02 04:21:08.597463 | orchestrator | PLAY [Apply role gnocchi] ******************************************************
2026-02-02 04:21:08.597474 | orchestrator | skipping: no hosts matched
2026-02-02 04:21:08.597505 | orchestrator |
2026-02-02 04:21:08.597516 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 04:21:08.597539 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 04:21:08.597552 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 04:21:08.597563 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 04:21:08.597597 | orchestrator |
2026-02-02 04:21:08.597609 | orchestrator |
2026-02-02 04:21:08.597623 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 04:21:08.597637 | orchestrator | Monday 02 February 2026 04:21:08 +0000 (0:00:00.354) 0:00:00.923 *******
2026-02-02 04:21:08.597650 | orchestrator | ===============================================================================
2026-02-02 04:21:08.597663 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.35s
2026-02-02 04:21:08.597675 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2026-02-02 04:21:10.932951 | orchestrator | 2026-02-02 04:21:10 | INFO  | Task d2683e47-84bd-4c24-953d-5eb6493d6ed5 (manila) was prepared for execution.
2026-02-02 04:21:10.933052 | orchestrator | 2026-02-02 04:21:10 | INFO  | It takes a moment until task d2683e47-84bd-4c24-953d-5eb6493d6ed5 (manila) has been started and output is visible here.
2026-02-02 04:21:50.012501 | orchestrator |
2026-02-02 04:21:50.012620 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-02 04:21:50.012706 | orchestrator |
2026-02-02 04:21:50.012756 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-02 04:21:50.012770 | orchestrator | Monday 02 February 2026 04:21:15 +0000 (0:00:00.267) 0:00:00.267 *******
2026-02-02 04:21:50.012782 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:21:50.012795 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:21:50.012806 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:21:50.012817 | orchestrator |
2026-02-02 04:21:50.012828 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-02 04:21:50.012840 | orchestrator | Monday 02 February 2026 04:21:15 +0000 (0:00:00.307) 0:00:00.574 *******
2026-02-02 04:21:50.012851 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True)
2026-02-02 04:21:50.012862 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True)
2026-02-02 04:21:50.012873 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True)
2026-02-02 04:21:50.012884 | orchestrator |
2026-02-02 04:21:50.012895 | orchestrator | PLAY [Apply role manila] *******************************************************
2026-02-02 04:21:50.012906 | orchestrator |
2026-02-02 04:21:50.012917 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-02-02 04:21:50.012928 | orchestrator | Monday 02 February 2026 04:21:15 +0000 (0:00:00.432) 0:00:01.007 *******
2026-02-02 04:21:50.012957 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 04:21:50.012969 | orchestrator |
2026-02-02 04:21:50.012981 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-02-02 04:21:50.012992 | orchestrator | Monday 02 February 2026 04:21:16 +0000 (0:00:00.538) 0:00:01.545 *******
2026-02-02 04:21:50.013003 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:21:50.013015 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:21:50.013026 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:21:50.013037 | orchestrator |
2026-02-02 04:21:50.013048 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************
2026-02-02 04:21:50.013059 | orchestrator | Monday 02 February 2026 04:21:16 +0000 (0:00:00.438) 0:00:01.984 *******
2026-02-02 04:21:50.013070 | orchestrator | changed: [testbed-node-0] => (item=manila (share))
2026-02-02 04:21:50.013082 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2))
2026-02-02 04:21:50.013092 | orchestrator |
2026-02-02 04:21:50.013103 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] ***********************
2026-02-02 04:21:50.013114 | orchestrator | Monday 02 February 2026 04:21:22 +0000 (0:00:05.937) 0:00:07.921 *******
2026-02-02 04:21:50.013126 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal)
2026-02-02 04:21:50.013138 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public)
2026-02-02 04:21:50.013172 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal)
2026-02-02 04:21:50.013184 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public)
2026-02-02 04:21:50.013224 | orchestrator |
2026-02-02 04:21:50.013237 | orchestrator | TASK [service-ks-register : manila | Creating projects] ************************
2026-02-02 04:21:50.013249 | orchestrator | Monday 02 February 2026 04:21:34 +0000 (0:00:11.631) 0:00:19.553 *******
2026-02-02 04:21:50.013267 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-02 04:21:50.013286 | orchestrator |
2026-02-02 04:21:50.013303 | orchestrator | TASK [service-ks-register : manila | Creating users] ***************************
2026-02-02 04:21:50.013320 | orchestrator | Monday 02 February 2026 04:21:37 +0000 (0:00:03.169) 0:00:22.723 *******
2026-02-02 04:21:50.013338 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-02 04:21:50.013356 | orchestrator | changed: [testbed-node-0] => (item=manila -> service)
2026-02-02 04:21:50.013374 | orchestrator |
2026-02-02 04:21:50.013392 | orchestrator | TASK [service-ks-register : manila | Creating roles] ***************************
2026-02-02 04:21:50.013410 | orchestrator | Monday 02 February 2026 04:21:41 +0000 (0:00:03.712) 0:00:26.436 *******
2026-02-02 04:21:50.013428 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-02 04:21:50.013447 | orchestrator |
2026-02-02 04:21:50.013466 | orchestrator | TASK [service-ks-register : manila | Granting user roles] **********************
2026-02-02 04:21:50.013484 | orchestrator | Monday 02 February 2026 04:21:44 +0000 (0:00:02.992) 0:00:29.428 *******
2026-02-02 04:21:50.013503 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin)
2026-02-02 04:21:50.013515 | orchestrator |
2026-02-02 04:21:50.013526 | orchestrator | TASK [manila : Ensuring config directories exist] ******************************
2026-02-02 04:21:50.013537 | orchestrator | Monday 02 February 2026 04:21:47 +0000 (0:00:03.648) 0:00:33.077 *******
2026-02-02 04:21:50.013573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-02 04:21:50.013589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-02 04:21:50.013609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-02 04:21:50.013634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-02 04:21:50.013647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-02 04:21:50.013659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-02 04:21:50.013733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-02 04:21:59.983566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-02 04:21:59.983736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-02 04:21:59.983794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-02 04:21:59.983816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-02 04:21:59.983835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-02 04:21:59.983855 | orchestrator |
2026-02-02 04:21:59.983876 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-02-02 04:21:59.983897 | orchestrator | Monday 02 February 2026 04:21:50 +0000 (0:00:02.179) 0:00:35.256 *******
2026-02-02 04:21:59.983917 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 04:21:59.983937 | orchestrator |
2026-02-02 04:21:59.983955 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] **************
2026-02-02 04:21:59.983972 | orchestrator | Monday 02 February 2026 04:21:50 +0000 (0:00:00.514) 0:00:35.771 *******
2026-02-02 04:21:59.983988 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:21:59.984008 | orchestrator | changed: [testbed-node-1]
2026-02-02 04:21:59.984027 | orchestrator | changed: [testbed-node-2]
2026-02-02 04:21:59.984046 | orchestrator |
2026-02-02 04:21:59.984063 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] *********************
2026-02-02 04:21:59.984083 | orchestrator | Monday 02 February 2026 04:21:51 +0000 (0:00:00.910) 0:00:36.682 *******
2026-02-02 04:21:59.984102 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-02 04:21:59.984150 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-02 04:21:59.984171 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-02 04:21:59.984240 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-02 04:21:59.984264 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-02 04:21:59.984295 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-02 04:21:59.984316 | orchestrator |
2026-02-02 04:21:59.984336 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] *********************************
2026-02-02 04:21:59.984355 | orchestrator | Monday 02 February 2026 04:21:53 +0000 (0:00:01.726) 0:00:38.408 *******
2026-02-02 04:21:59.984374 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-02 04:21:59.984393 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-02 04:21:59.984410 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-02 04:21:59.984429 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-02 04:21:59.984448 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-02 04:21:59.984467 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-02 04:21:59.984486 | orchestrator |
2026-02-02 04:21:59.984504 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] *****
2026-02-02 04:21:59.984522 | orchestrator | Monday 02 February 2026 04:21:54 +0000 (0:00:01.179) 0:00:39.587 *******
2026-02-02 04:21:59.984540 | orchestrator | ok: [testbed-node-0] => (item=manila-share)
2026-02-02 04:21:59.984558 | orchestrator | ok: [testbed-node-1] => (item=manila-share)
2026-02-02 04:21:59.984575 | orchestrator | ok: [testbed-node-2] => (item=manila-share)
2026-02-02 04:21:59.984592 | orchestrator |
2026-02-02 04:21:59.984608 | orchestrator | TASK [manila : Check if policies shall be overwritten] *************************
2026-02-02 04:21:59.984624 | orchestrator | Monday 02 February 2026 04:21:55 +0000 (0:00:00.651) 0:00:40.239 *******
2026-02-02 04:21:59.984642 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:21:59.984659 | orchestrator |
2026-02-02 04:21:59.984676 | orchestrator | TASK [manila : Set manila policy file] *****************************************
2026-02-02 04:21:59.984694 | orchestrator | Monday 02 February 2026 04:21:55 +0000 (0:00:00.117) 0:00:40.357 *******
2026-02-02 04:21:59.984712 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:21:59.984728 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:21:59.984746 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:21:59.984764 | orchestrator |
2026-02-02 04:21:59.984783 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-02-02 04:21:59.984803 | orchestrator | Monday 02 February 2026 04:21:55 +0000 (0:00:00.504) 0:00:40.861 *******
2026-02-02 04:21:59.984823 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 04:21:59.984843 | orchestrator |
2026-02-02 04:21:59.984863 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] *********
2026-02-02 04:21:59.984882 | orchestrator | Monday 02 February 2026 04:21:56 +0000 (0:00:00.567) 0:00:41.429 *******
2026-02-02 04:21:59.984938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-02 04:22:00.830767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-02 04:22:00.830891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-02 04:22:00.830916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:00.831008 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:00.831028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:00.831093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:00.831122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 
'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:00.831141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:00.831158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:00.831176 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:00.831194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:00.831297 | orchestrator | 2026-02-02 04:22:00.831317 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-02-02 04:22:00.831336 | orchestrator | Monday 02 February 2026 04:22:00 +0000 (0:00:03.825) 0:00:45.254 ******* 2026-02-02 04:22:00.831368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-02 04:22:01.436375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 04:22:01.436482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 04:22:01.436498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-02 04:22:01.436511 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:22:01.436525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-02 04:22:01.436590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 04:22:01.436604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 04:22:01.436641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-02 04:22:01.436654 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:22:01.436665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-02 04:22:01.436677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 04:22:01.436689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 04:22:01.436708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-02 04:22:01.436719 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:22:01.436730 | orchestrator | 2026-02-02 04:22:01.436743 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-02-02 04:22:01.436755 | orchestrator | Monday 02 February 2026 04:22:00 +0000 (0:00:00.834) 0:00:46.089 ******* 2026-02-02 04:22:01.436780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-02 04:22:05.814271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 04:22:05.814385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 04:22:05.814401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-02 04:22:05.814432 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:22:05.814450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-02 04:22:05.814464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 04:22:05.814479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 04:22:05.814523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-02 04:22:05.814533 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:22:05.814542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-02 04:22:05.814558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 04:22:05.814566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 04:22:05.814574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-02 04:22:05.814582 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:22:05.814590 | orchestrator | 2026-02-02 04:22:05.814599 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-02-02 04:22:05.814610 | orchestrator | Monday 02 
February 2026 04:22:01 +0000 (0:00:00.811) 0:00:46.901 ******* 2026-02-02 04:22:05.814630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-02 04:22:12.428919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-02 04:22:12.429056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 
'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-02 04:22:12.429074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:12.429088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 
'timeout': '30'}}}) 2026-02-02 04:22:12.429100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:12.429143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:12.429166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
manila-share 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:12.429198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:12.429285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:12.429307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:12.429326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:12.429344 | orchestrator | 2026-02-02 04:22:12.429358 | orchestrator | TASK [manila : Copying over manila.conf] *************************************** 2026-02-02 04:22:12.429372 | orchestrator | Monday 02 February 2026 04:22:06 +0000 (0:00:04.367) 0:00:51.268 ******* 2026-02-02 04:22:12.429400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-02 04:22:16.564960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 
'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-02 04:22:16.565189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-02 04:22:16.565327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:16.565345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 04:22:16.565374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:16.565406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 04:22:16.565522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:16.565541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 04:22:16.565555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:16.565569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:16.565582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:16.565596 | orchestrator | 2026-02-02 04:22:16.565611 | orchestrator | TASK [manila : Copying over manila-share.conf] ********************************* 2026-02-02 04:22:16.565626 | orchestrator | Monday 02 February 2026 04:22:12 +0000 (0:00:06.419) 0:00:57.687 ******* 
2026-02-02 04:22:16.565641 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-02-02 04:22:16.565690 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-02-02 04:22:16.565704 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-02-02 04:22:16.565715 | orchestrator | 2026-02-02 04:22:16.565726 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-02-02 04:22:16.565746 | orchestrator | Monday 02 February 2026 04:22:16 +0000 (0:00:03.492) 0:01:01.180 ******* 2026-02-02 04:22:16.565769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-02 04:22:19.758264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 04:22:19.758348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 04:22:19.758358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-02 04:22:19.758367 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:22:19.758379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-02 04:22:19.758403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 04:22:19.758428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 04:22:19.758454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-02 04:22:19.758464 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:22:19.758473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-02 04:22:19.758482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 
5672'], 'timeout': '30'}}})  2026-02-02 04:22:19.758491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 04:22:19.758511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-02 04:22:19.758520 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:22:19.758530 | orchestrator | 2026-02-02 04:22:19.758539 | orchestrator | TASK [manila : Check manila containers] **************************************** 2026-02-02 04:22:19.758550 | orchestrator | Monday 02 February 2026 04:22:16 +0000 (0:00:00.644) 0:01:01.825 ******* 2026-02-02 04:22:19.758567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-02 04:22:57.855467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-02 04:22:57.855584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-02 04:22:57.855603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:57.855654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:57.855667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:57.855697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:57.855711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:57.855723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:57.855734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:57.855760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-02 04:22:57.855772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-02 04:22:57.855784 | orchestrator |
2026-02-02 04:22:57.855798 | orchestrator | TASK [manila : Creating Manila database] ***************************************
2026-02-02 04:22:57.855811 | orchestrator | Monday 02 February 2026 04:22:19 +0000 (0:00:03.206) 0:01:05.031 *******
2026-02-02 04:22:57.855822 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:22:57.855835 | orchestrator |
2026-02-02 04:22:57.855846 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] **********
2026-02-02 04:22:57.855857 | orchestrator | Monday 02 February 2026 04:22:21 +0000 (0:00:02.029) 0:01:07.060 *******
2026-02-02 04:22:57.855868 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:22:57.855879 | orchestrator |
2026-02-02 04:22:57.855890 | orchestrator | TASK [manila : Running Manila bootstrap container] *****************************
2026-02-02 04:22:57.855901 | orchestrator | Monday 02 February 2026 04:22:24 +0000 (0:00:02.120) 0:01:09.181 *******
2026-02-02 04:22:57.855912 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:22:57.855923 | orchestrator |
2026-02-02 04:22:57.855937 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-02-02 04:22:57.855949 | orchestrator | Monday 02 February 2026 04:22:57 +0000 (0:00:33.549) 0:01:42.731 *******
2026-02-02 04:22:57.855962 | orchestrator |
2026-02-02 04:22:57.855982 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-02-02 04:23:52.448391 | orchestrator | Monday 02 February 2026 04:22:57 +0000 (0:00:00.105) 0:01:42.836 *******
2026-02-02 04:23:52.448555 | orchestrator |
2026-02-02 04:23:52.448575 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-02-02 04:23:52.448595 | orchestrator | Monday 02 February 2026 04:22:57 +0000 (0:00:00.077) 0:01:42.913 *******
2026-02-02 04:23:52.448614 | orchestrator |
2026-02-02 04:23:52.448633 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************
2026-02-02 04:23:52.448653 | orchestrator | Monday 02 February 2026 04:22:57 +0000 (0:00:00.095) 0:01:43.009 *******
2026-02-02 04:23:52.448671 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:23:52.448684 | orchestrator | changed: [testbed-node-2]
2026-02-02 04:23:52.448695 | orchestrator | changed: [testbed-node-1]
2026-02-02 04:23:52.448706 | orchestrator |
2026-02-02 04:23:52.448717 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] ***********************
2026-02-02 04:23:52.448728 | orchestrator | Monday 02 February 2026 04:23:12 +0000 (0:00:14.958) 0:01:57.967 *******
2026-02-02 04:23:52.448739 | orchestrator | changed: [testbed-node-1]
2026-02-02 04:23:52.448751 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:23:52.448762 | orchestrator | changed: [testbed-node-2]
2026-02-02 04:23:52.448772 | orchestrator |
2026-02-02 04:23:52.448784 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ******************
2026-02-02 04:23:52.448849 | orchestrator | Monday 02 February 2026 04:23:23 +0000 (0:00:10.592) 0:02:08.560 *******
2026-02-02 04:23:52.448862 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:23:52.448872 | orchestrator | changed: [testbed-node-2]
2026-02-02 04:23:52.448883 | orchestrator | changed: [testbed-node-1]
2026-02-02 04:23:52.448894 | orchestrator |
2026-02-02 04:23:52.448905 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] **********************
2026-02-02 04:23:52.448916 | orchestrator | Monday 02 February 2026 04:23:33 +0000 (0:00:10.052) 0:02:18.613 *******
2026-02-02 04:23:52.448927 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:23:52.448940 | orchestrator | changed: [testbed-node-2]
2026-02-02 04:23:52.448952 | orchestrator | changed: [testbed-node-1]
2026-02-02 04:23:52.448964 | orchestrator |
2026-02-02 04:23:52.448978 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 04:23:52.448992 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-02 04:23:52.449007 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-02 04:23:52.449021 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-02 04:23:52.449033 | orchestrator |
2026-02-02 04:23:52.449046 | orchestrator |
2026-02-02 04:23:52.449059 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 04:23:52.449072 | orchestrator | Monday 02 February 2026 04:23:52 +0000 (0:00:18.562) 0:02:37.175 *******
2026-02-02 04:23:52.449086 | orchestrator | ===============================================================================
2026-02-02 04:23:52.449097 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 33.55s
2026-02-02 04:23:52.449108 | orchestrator | manila : Restart manila-share container -------------------------------- 18.56s
2026-02-02 04:23:52.449119 | orchestrator | manila : Restart manila-api container ---------------------------------- 14.96s
2026-02-02 04:23:52.449130 | orchestrator | service-ks-register : manila | Creating endpoints ---------------------- 11.63s
2026-02-02 04:23:52.449141 | orchestrator | manila : Restart manila-data container --------------------------------- 10.59s
2026-02-02 04:23:52.449167 |
orchestrator | manila : Restart manila-scheduler container ---------------------------- 10.05s 2026-02-02 04:23:52.449178 | orchestrator | manila : Copying over manila.conf --------------------------------------- 6.42s 2026-02-02 04:23:52.449189 | orchestrator | service-ks-register : manila | Creating services ------------------------ 5.94s 2026-02-02 04:23:52.449200 | orchestrator | manila : Copying over config.json files for services -------------------- 4.37s 2026-02-02 04:23:52.449210 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 3.83s 2026-02-02 04:23:52.449221 | orchestrator | service-ks-register : manila | Creating users --------------------------- 3.71s 2026-02-02 04:23:52.449232 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 3.65s 2026-02-02 04:23:52.449243 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 3.49s 2026-02-02 04:23:52.449254 | orchestrator | manila : Check manila containers ---------------------------------------- 3.21s 2026-02-02 04:23:52.449265 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.17s 2026-02-02 04:23:52.449276 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 2.99s 2026-02-02 04:23:52.449287 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.18s 2026-02-02 04:23:52.449297 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.12s 2026-02-02 04:23:52.449308 | orchestrator | manila : Creating Manila database --------------------------------------- 2.03s 2026-02-02 04:23:52.449319 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 1.73s 2026-02-02 04:23:52.787304 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh 2026-02-02 04:24:05.001580 | orchestrator | 2026-02-02 04:24:04 
| INFO  | Task d9368f21-9971-49ff-a498-9088a923512c (netdata) was prepared for execution. 2026-02-02 04:24:05.001691 | orchestrator | 2026-02-02 04:24:04 | INFO  | It takes a moment until task d9368f21-9971-49ff-a498-9088a923512c (netdata) has been started and output is visible here. 2026-02-02 04:25:38.817922 | orchestrator | 2026-02-02 04:25:38.818092 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 04:25:38.818111 | orchestrator | 2026-02-02 04:25:38.818123 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 04:25:38.818134 | orchestrator | Monday 02 February 2026 04:24:09 +0000 (0:00:00.253) 0:00:00.253 ******* 2026-02-02 04:25:38.818144 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-02-02 04:25:38.818155 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-02-02 04:25:38.818165 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-02-02 04:25:38.818183 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-02-02 04:25:38.818194 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-02-02 04:25:38.818203 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-02-02 04:25:38.818213 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-02-02 04:25:38.818222 | orchestrator | 2026-02-02 04:25:38.818232 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-02-02 04:25:38.818242 | orchestrator | 2026-02-02 04:25:38.818252 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-02-02 04:25:38.818261 | orchestrator | Monday 02 February 2026 04:24:10 +0000 (0:00:00.824) 0:00:01.078 ******* 2026-02-02 04:25:38.818273 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 04:25:38.818285 | orchestrator | 2026-02-02 04:25:38.818295 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-02-02 04:25:38.818305 | orchestrator | Monday 02 February 2026 04:24:11 +0000 (0:00:01.277) 0:00:02.355 ******* 2026-02-02 04:25:38.818315 | orchestrator | ok: [testbed-manager] 2026-02-02 04:25:38.818326 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:25:38.818337 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:25:38.818346 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:25:38.818356 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:25:38.818365 | orchestrator | ok: [testbed-node-4] 2026-02-02 04:25:38.818375 | orchestrator | ok: [testbed-node-5] 2026-02-02 04:25:38.818385 | orchestrator | 2026-02-02 04:25:38.818414 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-02-02 04:25:38.818424 | orchestrator | Monday 02 February 2026 04:24:13 +0000 (0:00:01.802) 0:00:04.158 ******* 2026-02-02 04:25:38.818434 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:25:38.818443 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:25:38.818453 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:25:38.818463 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:25:38.818472 | orchestrator | ok: [testbed-node-4] 2026-02-02 04:25:38.818482 | orchestrator | ok: [testbed-node-5] 2026-02-02 04:25:38.818492 | orchestrator | ok: [testbed-manager] 2026-02-02 04:25:38.818502 | orchestrator | 2026-02-02 04:25:38.818512 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-02-02 04:25:38.818521 | orchestrator | Monday 02 February 2026 04:24:15 +0000 (0:00:02.070) 0:00:06.228 ******* 
2026-02-02 04:25:38.818531 | orchestrator | changed: [testbed-manager]
2026-02-02 04:25:38.818541 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:25:38.818551 | orchestrator | changed: [testbed-node-1]
2026-02-02 04:25:38.818560 | orchestrator | changed: [testbed-node-2]
2026-02-02 04:25:38.818570 | orchestrator | changed: [testbed-node-3]
2026-02-02 04:25:38.818603 | orchestrator | changed: [testbed-node-4]
2026-02-02 04:25:38.818639 | orchestrator | changed: [testbed-node-5]
2026-02-02 04:25:38.818649 | orchestrator |
2026-02-02 04:25:38.818658 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-02-02 04:25:38.818682 | orchestrator | Monday 02 February 2026 04:24:16 +0000 (0:00:01.464) 0:00:07.693 *******
2026-02-02 04:25:38.818692 | orchestrator | changed: [testbed-manager]
2026-02-02 04:25:38.818702 | orchestrator | changed: [testbed-node-3]
2026-02-02 04:25:38.818712 | orchestrator | changed: [testbed-node-4]
2026-02-02 04:25:38.818721 | orchestrator | changed: [testbed-node-5]
2026-02-02 04:25:38.818731 | orchestrator | changed: [testbed-node-2]
2026-02-02 04:25:38.818741 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:25:38.818750 | orchestrator | changed: [testbed-node-1]
2026-02-02 04:25:38.818760 | orchestrator |
2026-02-02 04:25:38.818770 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-02-02 04:25:38.818779 | orchestrator | Monday 02 February 2026 04:24:32 +0000 (0:00:15.789) 0:00:23.482 *******
2026-02-02 04:25:38.818789 | orchestrator | changed: [testbed-node-3]
2026-02-02 04:25:38.818799 | orchestrator | changed: [testbed-node-4]
2026-02-02 04:25:38.818808 | orchestrator | changed: [testbed-node-5]
2026-02-02 04:25:38.818818 | orchestrator | changed: [testbed-manager]
2026-02-02 04:25:38.818827 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:25:38.818837 | orchestrator | changed: [testbed-node-2]
2026-02-02 04:25:38.818846 | orchestrator | changed: [testbed-node-1]
2026-02-02 04:25:38.818856 | orchestrator |
2026-02-02 04:25:38.818866 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-02-02 04:25:38.818875 | orchestrator | Monday 02 February 2026 04:25:11 +0000 (0:00:39.225) 0:01:02.708 *******
2026-02-02 04:25:38.818886 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 04:25:38.818898 | orchestrator |
2026-02-02 04:25:38.818908 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-02-02 04:25:38.818917 | orchestrator | Monday 02 February 2026 04:25:13 +0000 (0:00:01.533) 0:01:04.241 *******
2026-02-02 04:25:38.818927 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-02-02 04:25:38.818937 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-02-02 04:25:38.818947 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-02-02 04:25:38.818957 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-02-02 04:25:38.818985 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-02-02 04:25:38.818996 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-02-02 04:25:38.819006 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-02-02 04:25:38.819016 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-02-02 04:25:38.819025 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-02-02 04:25:38.819035 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-02-02 04:25:38.819045 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-02-02 04:25:38.819055 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-02-02 04:25:38.819064 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-02-02 04:25:38.819074 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-02-02 04:25:38.819084 | orchestrator |
2026-02-02 04:25:38.819093 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-02-02 04:25:38.819104 | orchestrator | Monday 02 February 2026 04:25:16 +0000 (0:00:03.223) 0:01:07.464 *******
2026-02-02 04:25:38.819114 | orchestrator | ok: [testbed-manager]
2026-02-02 04:25:38.819124 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:25:38.819134 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:25:38.819144 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:25:38.819161 | orchestrator | ok: [testbed-node-3]
2026-02-02 04:25:38.819171 | orchestrator | ok: [testbed-node-4]
2026-02-02 04:25:38.819180 | orchestrator | ok: [testbed-node-5]
2026-02-02 04:25:38.819190 | orchestrator |
2026-02-02 04:25:38.819200 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-02-02 04:25:38.819209 | orchestrator | Monday 02 February 2026 04:25:17 +0000 (0:00:01.193) 0:01:08.658 *******
2026-02-02 04:25:38.819219 | orchestrator | changed: [testbed-manager]
2026-02-02 04:25:38.819229 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:25:38.819238 | orchestrator | changed: [testbed-node-1]
2026-02-02 04:25:38.819248 | orchestrator | changed: [testbed-node-2]
2026-02-02 04:25:38.819258 | orchestrator | changed: [testbed-node-3]
2026-02-02 04:25:38.819267 | orchestrator | changed: [testbed-node-4]
2026-02-02 04:25:38.819277 | orchestrator | changed: [testbed-node-5]
2026-02-02 04:25:38.819287 | orchestrator |
2026-02-02 04:25:38.819296 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-02-02 04:25:38.819306 | orchestrator | Monday 02 February 2026 04:25:19 +0000 (0:00:01.319) 0:01:09.978 *******
2026-02-02 04:25:38.819316 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:25:38.819325 | orchestrator | ok: [testbed-manager]
2026-02-02 04:25:38.819335 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:25:38.819345 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:25:38.819354 | orchestrator | ok: [testbed-node-3]
2026-02-02 04:25:38.819364 | orchestrator | ok: [testbed-node-4]
2026-02-02 04:25:38.819373 | orchestrator | ok: [testbed-node-5]
2026-02-02 04:25:38.819383 | orchestrator |
2026-02-02 04:25:38.819393 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-02-02 04:25:38.819403 | orchestrator | Monday 02 February 2026 04:25:20 +0000 (0:00:01.113) 0:01:11.091 *******
2026-02-02 04:25:38.819412 | orchestrator | ok: [testbed-manager]
2026-02-02 04:25:38.819422 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:25:38.819431 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:25:38.819441 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:25:38.819450 | orchestrator | ok: [testbed-node-3]
2026-02-02 04:25:38.819460 | orchestrator | ok: [testbed-node-5]
2026-02-02 04:25:38.819469 | orchestrator | ok: [testbed-node-4]
2026-02-02 04:25:38.819479 | orchestrator |
2026-02-02 04:25:38.819489 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-02-02 04:25:38.819499 | orchestrator | Monday 02 February 2026 04:25:22 +0000 (0:00:02.565) 0:01:13.656 *******
2026-02-02 04:25:38.819508 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-02-02 04:25:38.819525 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 04:25:38.819536 | orchestrator |
2026-02-02 04:25:38.819546 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-02-02 04:25:38.819556 | orchestrator | Monday 02 February 2026 04:25:24 +0000 (0:00:01.378) 0:01:15.034 *******
2026-02-02 04:25:38.819565 | orchestrator | changed: [testbed-manager]
2026-02-02 04:25:38.819575 | orchestrator |
2026-02-02 04:25:38.819585 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-02-02 04:25:38.819594 | orchestrator | Monday 02 February 2026 04:25:27 +0000 (0:00:03.106) 0:01:18.141 *******
2026-02-02 04:25:38.819604 | orchestrator | changed: [testbed-manager]
2026-02-02 04:25:38.819644 | orchestrator | changed: [testbed-node-3]
2026-02-02 04:25:38.819654 | orchestrator | changed: [testbed-node-4]
2026-02-02 04:25:38.819663 | orchestrator | changed: [testbed-node-5]
2026-02-02 04:25:38.819673 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:25:38.819683 | orchestrator | changed: [testbed-node-1]
2026-02-02 04:25:38.819692 | orchestrator | changed: [testbed-node-2]
2026-02-02 04:25:38.819702 | orchestrator |
2026-02-02 04:25:38.819712 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 04:25:38.819728 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 04:25:38.819739 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 04:25:38.819749 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 04:25:38.819759 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 04:25:38.819775 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 04:25:39.271867 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 04:25:39.271975 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-02 04:25:39.271991 | orchestrator |
2026-02-02 04:25:39.272004 | orchestrator |
2026-02-02 04:25:39.272017 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 04:25:39.272031 | orchestrator | Monday 02 February 2026 04:25:38 +0000 (0:00:11.578) 0:01:29.719 *******
2026-02-02 04:25:39.272042 | orchestrator | ===============================================================================
2026-02-02 04:25:39.272053 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 39.23s
2026-02-02 04:25:39.272064 | orchestrator | osism.services.netdata : Add repository -------------------------------- 15.79s
2026-02-02 04:25:39.272075 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.58s
2026-02-02 04:25:39.272085 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.22s
2026-02-02 04:25:39.272096 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 3.11s
2026-02-02 04:25:39.272107 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.57s
2026-02-02 04:25:39.272118 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.07s
2026-02-02 04:25:39.272129 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.80s
2026-02-02 04:25:39.272140 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.53s
2026-02-02 04:25:39.272150 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.46s
2026-02-02 04:25:39.272161 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.38s
2026-02-02 04:25:39.272172 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.32s
2026-02-02 04:25:39.272182 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.28s
2026-02-02 04:25:39.272194 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.19s
2026-02-02 04:25:39.272206 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.11s
2026-02-02 04:25:39.272216 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.82s
2026-02-02 04:25:41.631121 | orchestrator | 2026-02-02 04:25:41 | INFO  | Task c9032864-a071-4006-b68b-d3d7d85e8007 (prometheus) was prepared for execution.
2026-02-02 04:25:41.631237 | orchestrator | 2026-02-02 04:25:41 | INFO  | It takes a moment until task c9032864-a071-4006-b68b-d3d7d85e8007 (prometheus) has been started and output is visible here.
2026-02-02 04:25:50.877832 | orchestrator |
2026-02-02 04:25:50.877929 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-02 04:25:50.877940 | orchestrator |
2026-02-02 04:25:50.877945 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-02 04:25:50.877965 | orchestrator | Monday 02 February 2026 04:25:45 +0000 (0:00:00.278) 0:00:00.278 *******
2026-02-02 04:25:50.877970 | orchestrator | ok: [testbed-manager]
2026-02-02 04:25:50.877977 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:25:50.877981 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:25:50.877996 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:25:50.878001 | orchestrator | ok: [testbed-node-3]
2026-02-02 04:25:50.878006 | orchestrator | ok: [testbed-node-4]
2026-02-02 04:25:50.878011 | orchestrator | ok: [testbed-node-5]
2026-02-02 04:25:50.878053 | orchestrator |
2026-02-02 04:25:50.878059 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-02 04:25:50.878064 | orchestrator | Monday 02 February 2026 04:25:46 +0000 (0:00:00.836) 0:00:01.115 *******
2026-02-02 04:25:50.878069 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-02-02 04:25:50.878075 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-02-02 04:25:50.878079 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-02-02 04:25:50.878084 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-02-02 04:25:50.878088 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-02-02 04:25:50.878093 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-02-02 04:25:50.878098 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-02-02 04:25:50.878102 | orchestrator |
2026-02-02 04:25:50.878107 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-02-02 04:25:50.878111 | orchestrator |
2026-02-02 04:25:50.878116 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-02-02 04:25:50.878121 | orchestrator | Monday 02 February 2026 04:25:47 +0000 (0:00:00.895) 0:00:02.011 *******
2026-02-02 04:25:50.878126 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 04:25:50.878133 | orchestrator |
2026-02-02 04:25:50.878137 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-02-02 04:25:50.878142 | orchestrator | Monday 02 February 2026 04:25:49 +0000 (0:00:01.353) 0:00:03.364 *******
2026-02-02 04:25:50.878149 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-02 04:25:50.878158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 04:25:50.878167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 04:25:50.878181 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 04:25:50.878211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 04:25:50.878221 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 04:25:50.878230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:25:50.878237 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 04:25:50.878245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:25:50.878252 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 04:25:50.878260 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 04:25:50.878279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:25:51.910749 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 04:25:51.910866 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 04:25:51.910885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:25:51.910899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:25:51.910911 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 04:25:51.910925 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-02 04:25:51.910979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:25:51.910999 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-02 04:25:51.911013 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-02 04:25:51.911025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 04:25:51.911036 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-02 04:25:51.911047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 04:25:51.911066 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:25:51.911077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 04:25:51.911101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:25:56.548417 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:25:56.548531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:25:56.548548 | orchestrator | 2026-02-02 04:25:56.548563 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-02 04:25:56.548576 | orchestrator | Monday 02 February 2026 04:25:51 +0000 (0:00:02.886) 0:00:06.251 ******* 2026-02-02 04:25:56.548588 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 04:25:56.548601 | orchestrator | 2026-02-02 04:25:56.548612 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-02-02 04:25:56.548623 | orchestrator | Monday 02 February 2026 04:25:53 +0000 (0:00:01.625) 0:00:07.877 ******* 2026-02-02 04:25:56.548666 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 
'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-02 04:25:56.548702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-02 04:25:56.548714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-02 04:25:56.548726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-02 04:25:56.548771 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-02 04:25:56.548785 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-02 04:25:56.548796 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2026-02-02 04:25:56.548807 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-02 04:25:56.548832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:25:56.548853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:25:56.548874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:25:56.548892 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 04:25:56.548932 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 04:25:58.635089 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 04:25:58.635196 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 04:25:58.635239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:25:58.635253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:25:58.635264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:25:58.635276 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-02 04:25:58.635303 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-02 04:25:58.635334 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-02 04:25:58.635349 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-02 04:25:58.635368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 04:25:58.635380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 04:25:58.635391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 04:25:58.635402 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:25:58.635414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:25:58.635434 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:25:59.446179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:25:59.446305 | orchestrator | 2026-02-02 04:25:59.446323 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-02-02 04:25:59.446337 | orchestrator | Monday 02 February 2026 04:25:58 +0000 (0:00:05.093) 0:00:12.970 ******* 2026-02-02 04:25:59.446352 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-02 04:25:59.446366 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-02 04:25:59.446378 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 04:25:59.446432 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-02 04:25:59.446466 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:25:59.446479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 04:25:59.446500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:25:59.446512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:25:59.446524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 04:25:59.446535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:25:59.446547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 04:25:59.446563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:25:59.446583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:26:00.126004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 04:26:00.126192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:26:00.126209 | orchestrator | skipping: [testbed-manager]
2026-02-02 04:26:00.126225 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:26:00.126236 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:26:00.126248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 04:26:00.126260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:26:00.126272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:26:00.126299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 04:26:00.126312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:26:00.126343 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:26:00.126373 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 04:26:00.126386 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 04:26:00.126398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-02 04:26:00.126409 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:26:00.126420 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 04:26:00.126432 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 04:26:00.126443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-02 04:26:00.126455 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:26:00.126472 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 04:26:00.126500 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 04:26:01.191308 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-02 04:26:01.191411 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:26:01.191429 | orchestrator |
2026-02-02 04:26:01.191442 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2026-02-02 04:26:01.191455 | orchestrator | Monday 02 February 2026 04:26:00 +0000 (0:00:01.496) 0:00:14.466 *******
2026-02-02 04:26:01.191468 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-02 04:26:01.191482 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 04:26:01.191494 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 04:26:01.191526 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-02 04:26:01.191581 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:26:01.191595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 04:26:01.191607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:26:01.191618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:26:01.191630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 04:26:01.191704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:26:01.191722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 04:26:01.191742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:26:01.191764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:26:02.509616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 04:26:02.509777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:26:02.509799 | orchestrator | skipping: [testbed-manager]
2026-02-02 04:26:02.509810 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:26:02.509817 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:26:02.509824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 04:26:02.509832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:26:02.509839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:26:02.509878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 04:26:02.509885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:26:02.509891 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:26:02.509914 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 04:26:02.509921 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 04:26:02.509928 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-02 04:26:02.509934 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:26:02.509941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 04:26:02.509947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 04:26:02.509963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-02 04:26:02.509991 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:26:02.509999 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 04:26:02.510059 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 04:26:05.955153 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-02 04:26:05.955234 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:26:05.955244 | orchestrator |
2026-02-02 04:26:05.955252 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2026-02-02 04:26:05.955260 | orchestrator | Monday 02 February 2026 04:26:02 +0000 (0:00:02.376) 0:00:16.842 *******
2026-02-02 04:26:05.955267 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-02 04:26:05.955275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 04:26:05.955299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 04:26:05.955317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 04:26:05.955324 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 04:26:05.955340 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 04:26:05.955347 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 04:26:05.955353 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-02 04:26:05.955360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:26:05.955379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:26:05.955386 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 04:26:05.955396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 04:26:05.955402 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 04:26:05.955413 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-02 04:26:08.687708 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 04:26:08.687803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:26:08.687826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:26:08.687832 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-02 04:26:08.687847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:26:08.687852 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-02 04:26:08.687881 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-02 04:26:08.687896 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-02 04:26:08.687903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 04:26:08.687917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 04:26:08.687924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 04:26:08.687936 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:26:08.687945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:26:08.687953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:26:08.687963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:26:12.311259 | orchestrator | 2026-02-02 04:26:12.311366 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-02-02 04:26:12.311382 | orchestrator | Monday 02 February 2026 04:26:08 +0000 (0:00:06.176) 0:00:23.019 ******* 2026-02-02 04:26:12.311399 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-02 04:26:12.311416 | orchestrator | 2026-02-02 04:26:12.311428 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-02-02 04:26:12.311463 | orchestrator | Monday 02 February 2026 04:26:09 +0000 (0:00:00.876) 0:00:23.896 ******* 2026-02-02 04:26:12.311477 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1087843, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1356018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:12.311492 | orchestrator | skipping: [testbed-node-1] => 
(item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1087843, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1356018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:12.311504 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1087843, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1356018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:12.311531 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1087843, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1356018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-02 04:26:12.311543 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1087843, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1356018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:12.311556 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1087870, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.140137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:12.311587 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1087843, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1356018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:12.311607 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1087870, 'dev': 114, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.140137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:12.311619 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1087870, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.140137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:12.311630 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1087843, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1356018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:12.311647 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1087870, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.140137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:12.311721 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1087829, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1346157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:12.311734 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1087870, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.140137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:12.311766 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1087829, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1346157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:14.029492 | 
orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1087861, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1385849, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:14.029608 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1087829, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1346157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:14.029623 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1087829, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1346157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:14.029650 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1087870, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.140137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:14.029708 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1087821, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1329188, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:14.029719 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1087829, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1346157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:14.029751 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1087861, 'dev': 114, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1385849, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:14.029779 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1087846, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.136082, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:14.029790 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1087861, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1385849, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:14.029801 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1087861, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1385849, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:14.029816 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1087870, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.140137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-02 04:26:14.029827 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1087859, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1381257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:14.029837 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1087861, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1385849, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:14.029855 | orchestrator | 
skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rules, mode=0644, size=55956)
2026-02-02 04:26:14.029872 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/cadvisor.rules, mode=0644, size=3900)
2026-02-02 04:26:15.455229 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/cadvisor.rules, mode=0644, size=3900)
2026-02-02 04:26:15.455316 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/cadvisor.rules, mode=0644, size=3900)
2026-02-02 04:26:15.455346 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2026-02-02 04:26:15.455359 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/cadvisor.rules, mode=0644, size=3900)
2026-02-02 04:26:15.455371 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2026-02-02 04:26:15.455400 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2026-02-02 04:26:15.455413 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/openstack.rules, mode=0644, size=12293)
2026-02-02 04:26:15.455440 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2026-02-02 04:26:15.455453 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules, mode=0644, size=13522)
2026-02-02 04:26:15.455470 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, size=5987)
2026-02-02 04:26:15.455481 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules, mode=0644, size=13522)
2026-02-02 04:26:15.455500 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/cadvisor.rules, mode=0644, size=3900)
2026-02-02 04:26:15.455512 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2026-02-02 04:26:15.455523 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2026-02-02 04:26:15.455542 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules, mode=0644, size=55956)
2026-02-02 04:26:16.805039 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rules, mode=0644, size=13522)
2026-02-02 04:26:16.805135 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2026-02-02 04:26:16.805150 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, size=3)
2026-02-02 04:26:16.805177 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules, mode=0644, size=13522)
2026-02-02 04:26:16.805188 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2026-02-02 04:26:16.805198 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2026-02-02 04:26:16.805209 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, size=5987)
2026-02-02 04:26:16.805235 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, size=5987)
2026-02-02 04:26:16.805251 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2026-02-02 04:26:16.805261 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules, mode=0644, size=13522)
2026-02-02 04:26:16.805277 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2026-02-02 04:26:16.805288 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, size=5987)
2026-02-02 04:26:16.805298 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2026-02-02 04:26:16.805308 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, size=3)
2026-02-02 04:26:16.805325 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, size=3)
2026-02-02 04:26:17.887384 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, size=5987)
2026-02-02 04:26:17.887452 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules, mode=0644, size=12293)
2026-02-02 04:26:17.887479 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2026-02-02 04:26:17.887489 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, size=3)
2026-02-02 04:26:17.887498 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules, mode=0644, size=5593)
2026-02-02 04:26:17.887507 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2026-02-02 04:26:17.887515 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, size=3)
2026-02-02 04:26:17.887539 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2026-02-02 04:26:17.887554 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2026-02-02 04:26:17.887563 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, size=3)
2026-02-02 04:26:17.887571 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2026-02-02 04:26:17.887579 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules, mode=0644, size=5987)
2026-02-02 04:26:17.887587 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2026-02-02 04:26:17.887596 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2026-02-02 04:26:17.887617 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2026-02-02 04:26:19.082292 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules, mode=0644, size=334)
2026-02-02 04:26:19.082393 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rules, mode=0644, size=5051)
2026-02-02 04:26:19.082408 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules, mode=0644, size=3900)
2026-02-02 04:26:19.082421 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules, mode=0644, size=3)
2026-02-02 04:26:19.082432 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2026-02-02 04:26:19.082445 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2026-02-02 04:26:19.082471 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2026-02-02 04:26:19.082522 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rec.rules, mode=0644, size=2309)
2026-02-02 04:26:19.082535 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules, mode=0644, size=7408)
2026-02-02 04:26:19.082547 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, size=3)
2026-02-02 04:26:19.082565 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, size=3)
2026-02-02 04:26:19.082585 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rec.rules, mode=0644, size=3)
2026-02-02 04:26:19.082605 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules, mode=0644, size=3)
2026-02-02 04:26:19.082643 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rules, mode=0644, size=5051)
2026-02-02 04:26:19.082769 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/mysql.rules, mode=0644, size=3792)
2026-02-02 04:26:20.377837 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rules, mode=0644, size=5051)
2026-02-02 04:26:20.377934 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rules, mode=0644, size=5051)
2026-02-02 04:26:20.377951 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules, mode=0644, size=7933)
2026-02-02 04:26:20.377964 
| orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087826, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1332195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:20.377976 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1087856, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1375577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:20.378076 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1087856, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1375577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:20.378095 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1087893, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1431165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:20.378124 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1087856, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1375577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:20.378137 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1087887, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1428058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:20.378150 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:26:20.378164 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 5051, 'inode': 1087820, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1319249, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:20.378176 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1087852, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1373363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:20.378188 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1087852, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1373363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:20.378213 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1087852, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1373363, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:20.378226 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1087856, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1375577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:20.378244 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1087866, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1394885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:26.748492 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1087859, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1381257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}) 2026-02-02 04:26:26.748590 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1087887, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1428058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:26.748603 | orchestrator | skipping: [testbed-node-5] 2026-02-02 04:26:26.748613 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1087887, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1428058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:26.748643 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:26:26.748651 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1087887, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1428058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2026-02-02 04:26:26.748658 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:26:26.748704 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087826, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1332195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:26.748714 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1087852, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1373363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:26.748737 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1087820, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1319249, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:26.748745 | orchestrator | skipping: [testbed-node-0] => 
(item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1087887, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1428058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:26.748753 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:26:26.748760 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1087856, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1375577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:26.748769 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1087849, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1365576, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-02 04:26:26.748782 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1087852, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1373363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:26.748793 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1087887, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1428058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-02 04:26:26.748801 | orchestrator | skipping: [testbed-node-4] 2026-02-02 04:26:26.748808 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1087840, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1350646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-02 04:26:26.748822 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087869, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1395578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-02 04:26:52.261801 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087815, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1312826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-02 04:26:52.261928 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1087893, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1431165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-02 04:26:52.261983 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1087866, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1769999375.1394885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-02 04:26:52.262014 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087826, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1332195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-02 04:26:52.262092 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1087820, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1319249, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-02 04:26:52.262107 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1087856, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1375577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-02 04:26:52.262120 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1087852, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1373363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-02 04:26:52.262153 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1087887, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1428058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-02 04:26:52.262167 | orchestrator |
2026-02-02 04:26:52.262182 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-02-02 04:26:52.262199 | orchestrator | Monday 02 February 2026 04:26:32 +0000 (0:00:23.328) 0:00:47.224 *******
2026-02-02 04:26:52.262212 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-02 04:26:52.262227 | orchestrator |
2026-02-02 04:26:52.262240 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-02-02 04:26:52.262265 | orchestrator | Monday 02 February 2026 04:26:33 +0000 (0:00:00.707) 0:00:47.932 *******
2026-02-02 04:26:52.262281 | orchestrator | [WARNING]: Skipped
2026-02-02 04:26:52.262298 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-02 04:26:52.262313 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-02-02 04:26:52.262328 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-02 04:26:52.262342 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-02-02 04:26:52.262358 | orchestrator | [WARNING]: Skipped
2026-02-02 04:26:52.262372 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-02 04:26:52.262386 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-02-02 04:26:52.262400 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-02 04:26:52.262414 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-02-02 04:26:52.262428 | orchestrator | [WARNING]: Skipped
2026-02-02 04:26:52.262442 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-02 04:26:52.262455 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-02-02 04:26:52.262469 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-02 04:26:52.262484 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-02-02 04:26:52.262498 | orchestrator | [WARNING]: Skipped
2026-02-02 04:26:52.262512 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-02 04:26:52.262526 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-02-02 04:26:52.262539 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-02 04:26:52.262552 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-02-02 04:26:52.262565 | orchestrator | [WARNING]: Skipped
2026-02-02 04:26:52.262579 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-02 04:26:52.262593 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-02-02 04:26:52.262605 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-02 04:26:52.262619 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-02-02 04:26:52.262633 | orchestrator | [WARNING]: Skipped
2026-02-02 04:26:52.262647 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-02 04:26:52.262661 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-02-02 04:26:52.262674 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-02 04:26:52.262696 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-02-02 04:26:52.262782 | orchestrator | [WARNING]: Skipped
2026-02-02 04:26:52.262797 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-02 04:26:52.262811 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-02-02 04:26:52.262821 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-02 04:26:52.262833 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-02-02 04:26:52.262846 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-02 04:26:52.262859 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-02 04:26:52.262871 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-02 04:26:52.262883 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-02 04:26:52.262895 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-02 04:26:52.262907 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-02 04:26:52.262919 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-02 04:26:52.262931 | orchestrator |
2026-02-02 04:26:52.262944 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-02-02 04:26:52.262970 | orchestrator | Monday 02 February 2026 04:26:35 +0000 (0:00:01.884) 0:00:49.816 *******
2026-02-02 04:26:52.262984 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-02 04:26:52.262998 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:26:52.263011 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-02 04:26:52.263024 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:26:52.263036 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-02 04:26:52.263050 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:26:52.263081 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-02 04:27:08.667181 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:27:08.667255 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-02 04:27:08.667262 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:27:08.667267 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-02 04:27:08.667271 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:27:08.667276 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-02 04:27:08.667280 | orchestrator |
2026-02-02 04:27:08.667285 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-02-02 04:27:08.667290 | orchestrator | Monday 02 February 2026 04:26:52 +0000 (0:00:16.781) 0:01:06.598 *******
2026-02-02 04:27:08.667294 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-02 04:27:08.667298 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:27:08.667302 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-02 04:27:08.667306 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:27:08.667310 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-02 04:27:08.667313 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:27:08.667317 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-02 04:27:08.667321 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:27:08.667325 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-02 04:27:08.667329 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:27:08.667333 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-02 04:27:08.667337 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:27:08.667340 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-02 04:27:08.667344 | orchestrator |
2026-02-02 04:27:08.667348 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-02-02 04:27:08.667352 | orchestrator | Monday 02 February 2026 04:26:55 +0000 (0:00:02.787) 0:01:09.386 *******
2026-02-02 04:27:08.667356 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-02 04:27:08.667361 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-02 04:27:08.667366 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:27:08.667370 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:27:08.667374 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-02 04:27:08.667378 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:27:08.667382 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-02 04:27:08.667402 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:27:08.667406 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-02 04:27:08.667410 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:27:08.667424 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-02 04:27:08.667428 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-02 04:27:08.667432 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:27:08.667436 | orchestrator |
2026-02-02 04:27:08.667440 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-02-02 04:27:08.667444 | orchestrator | Monday 02 February 2026 04:26:56 +0000 (0:00:01.837) 0:01:11.224 *******
2026-02-02 04:27:08.667448 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-02 04:27:08.667452 | orchestrator |
2026-02-02 04:27:08.667456 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-02-02 04:27:08.667460 | orchestrator | Monday 02 February 2026 04:26:57 +0000 (0:00:00.737) 0:01:11.961 *******
2026-02-02 04:27:08.667464 | orchestrator | skipping: [testbed-manager]
2026-02-02 04:27:08.667468 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:27:08.667472 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:27:08.667476 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:27:08.667479 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:27:08.667483 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:27:08.667487 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:27:08.667491 | orchestrator |
2026-02-02 04:27:08.667495 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-02-02 04:27:08.667499 | orchestrator | Monday 02 February 2026 04:26:58 +0000 (0:00:00.720) 0:01:12.682 *******
2026-02-02 04:27:08.667503 | orchestrator | skipping: [testbed-manager]
2026-02-02 04:27:08.667506 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:27:08.667510 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:27:08.667514 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:27:08.667518 | orchestrator | changed: [testbed-node-1]
2026-02-02 04:27:08.667522 | orchestrator | changed: [testbed-node-2]
2026-02-02 04:27:08.667526 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:27:08.667530 | orchestrator |
2026-02-02 04:27:08.667534 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-02-02 04:27:08.667547 | orchestrator | Monday 02 February 2026 04:27:00 +0000 (0:00:02.104) 0:01:14.787 *******
2026-02-02 04:27:08.667552 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-02 04:27:08.667556 | orchestrator | skipping: [testbed-manager]
2026-02-02 04:27:08.667560 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-02 04:27:08.667564 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-02 04:27:08.667568 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-02 04:27:08.667572 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-02 04:27:08.667575 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:27:08.667579 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:27:08.667583 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:27:08.667587 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:27:08.667591 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-02 04:27:08.667595 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:27:08.667599 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-02 04:27:08.667606 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:27:08.667610 | orchestrator |
2026-02-02 04:27:08.667614 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-02-02 04:27:08.667618 | orchestrator | Monday 02 February 2026 04:27:01 +0000 (0:00:01.568) 0:01:16.356 *******
2026-02-02 04:27:08.667622 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-02 04:27:08.667626 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:27:08.667630 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-02 04:27:08.667634 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:27:08.667637 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-02 04:27:08.667641 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:27:08.667645 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-02 04:27:08.667649 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:27:08.667653 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-02 04:27:08.667657 | orchestrator | skipping: [testbed-node-4] 2026-02-02 04:27:08.667661 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-02 04:27:08.667664 | orchestrator | skipping: [testbed-node-5] 2026-02-02 04:27:08.667668 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-02-02 04:27:08.667672 | orchestrator | 2026-02-02 04:27:08.667676 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-02-02 04:27:08.667680 | orchestrator | Monday 02 February 2026 04:27:03 +0000 (0:00:01.399) 0:01:17.755 ******* 2026-02-02 04:27:08.667684 | orchestrator | [WARNING]: Skipped 2026-02-02 04:27:08.667689 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-02-02 04:27:08.667693 | orchestrator | due to this access issue: 2026-02-02 04:27:08.667697 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-02-02 04:27:08.667701 | orchestrator | not a directory 2026-02-02 04:27:08.667707 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-02 04:27:08.667712 | orchestrator | 2026-02-02 04:27:08.667715 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-02-02 04:27:08.667719 | orchestrator | Monday 02 February 2026 04:27:04 +0000 (0:00:01.115) 0:01:18.870 ******* 2026-02-02 04:27:08.667723 | orchestrator | skipping: [testbed-manager] 2026-02-02 04:27:08.667756 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:27:08.667761 | orchestrator | skipping: [testbed-node-1] 2026-02-02 
04:27:08.667766 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:27:08.667770 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:27:08.667774 | orchestrator | skipping: [testbed-node-4] 2026-02-02 04:27:08.667778 | orchestrator | skipping: [testbed-node-5] 2026-02-02 04:27:08.667783 | orchestrator | 2026-02-02 04:27:08.667787 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-02-02 04:27:08.667791 | orchestrator | Monday 02 February 2026 04:27:05 +0000 (0:00:00.947) 0:01:19.817 ******* 2026-02-02 04:27:08.667796 | orchestrator | skipping: [testbed-manager] 2026-02-02 04:27:08.667800 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:27:08.667804 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:27:08.667809 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:27:08.667813 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:27:08.667817 | orchestrator | skipping: [testbed-node-4] 2026-02-02 04:27:08.667822 | orchestrator | skipping: [testbed-node-5] 2026-02-02 04:27:08.667826 | orchestrator | 2026-02-02 04:27:08.667831 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-02-02 04:27:08.667840 | orchestrator | Monday 02 February 2026 04:27:06 +0000 (0:00:00.882) 0:01:20.699 ******* 2026-02-02 04:27:08.667852 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-02 04:27:10.315613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-02 04:27:10.315825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-02 04:27:10.315858 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-02 04:27:10.315878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-02 04:27:10.315918 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-02 04:27:10.315938 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-02 04:27:10.315987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:27:10.316036 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-02 04:27:10.316056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:27:10.316077 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 04:27:10.316099 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 04:27:10.316119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:27:10.316147 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 04:27:10.316179 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 04:27:10.316199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:27:10.316234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:27:12.252789 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-02 04:27:12.252904 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-02 04:27:12.252939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:27:12.252953 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-02 04:27:12.252985 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-02 04:27:12.252998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 04:27:12.253028 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 04:27:12.253041 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:27:12.253052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-02 04:27:12.253064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:27:12.253081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:27:12.253103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 04:27:12.253115 | orchestrator | 2026-02-02 04:27:12.253129 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-02-02 04:27:12.253142 | orchestrator | Monday 02 February 2026 04:27:10 +0000 (0:00:03.963) 0:01:24.662 ******* 2026-02-02 04:27:12.253152 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-02 04:27:12.253164 | orchestrator | skipping: [testbed-manager] 2026-02-02 04:27:12.253175 | orchestrator | 2026-02-02 04:27:12.253186 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-02 04:27:12.253198 | orchestrator | Monday 02 February 2026 04:27:11 +0000 (0:00:01.220) 0:01:25.883 ******* 2026-02-02 04:27:12.253209 | orchestrator | 2026-02-02 04:27:12.253220 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-02 04:27:12.253231 | orchestrator | Monday 02 February 2026 04:27:11 +0000 (0:00:00.245) 0:01:26.128 ******* 2026-02-02 04:27:12.253242 | orchestrator | 2026-02-02 04:27:12.253253 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-02 04:27:12.253264 | orchestrator | Monday 02 February 2026 04:27:11 +0000 (0:00:00.073) 0:01:26.202 ******* 2026-02-02 04:27:12.253277 | orchestrator | 2026-02-02 04:27:12.253290 | 
orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-02 04:27:12.253302 | orchestrator | Monday 02 February 2026 04:27:11 +0000 (0:00:00.083) 0:01:26.285 ******* 2026-02-02 04:27:12.253315 | orchestrator | 2026-02-02 04:27:12.253328 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-02 04:27:12.253341 | orchestrator | Monday 02 February 2026 04:27:11 +0000 (0:00:00.066) 0:01:26.352 ******* 2026-02-02 04:27:12.253354 | orchestrator | 2026-02-02 04:27:12.253367 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-02 04:27:12.253379 | orchestrator | Monday 02 February 2026 04:27:12 +0000 (0:00:00.070) 0:01:26.423 ******* 2026-02-02 04:27:12.253392 | orchestrator | 2026-02-02 04:27:12.253405 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-02 04:27:12.253426 | orchestrator | Monday 02 February 2026 04:27:12 +0000 (0:00:00.066) 0:01:26.490 ******* 2026-02-02 04:28:59.758304 | orchestrator | 2026-02-02 04:28:59.758405 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-02-02 04:28:59.758418 | orchestrator | Monday 02 February 2026 04:27:12 +0000 (0:00:00.092) 0:01:26.583 ******* 2026-02-02 04:28:59.758427 | orchestrator | changed: [testbed-manager] 2026-02-02 04:28:59.758436 | orchestrator | 2026-02-02 04:28:59.758444 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-02-02 04:28:59.758452 | orchestrator | Monday 02 February 2026 04:27:31 +0000 (0:00:19.463) 0:01:46.046 ******* 2026-02-02 04:28:59.758461 | orchestrator | changed: [testbed-manager] 2026-02-02 04:28:59.758469 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:28:59.758477 | orchestrator | changed: [testbed-node-3] 2026-02-02 04:28:59.758485 | orchestrator | changed: [testbed-node-5] 
2026-02-02 04:28:59.758492 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:28:59.758500 | orchestrator | changed: [testbed-node-4] 2026-02-02 04:28:59.758509 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:28:59.758517 | orchestrator | 2026-02-02 04:28:59.758525 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-02-02 04:28:59.758533 | orchestrator | Monday 02 February 2026 04:27:45 +0000 (0:00:13.902) 0:01:59.948 ******* 2026-02-02 04:28:59.758563 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:28:59.758571 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:28:59.758579 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:28:59.758587 | orchestrator | 2026-02-02 04:28:59.758595 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-02-02 04:28:59.758604 | orchestrator | Monday 02 February 2026 04:27:56 +0000 (0:00:10.650) 0:02:10.598 ******* 2026-02-02 04:28:59.758612 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:28:59.758620 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:28:59.758628 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:28:59.758635 | orchestrator | 2026-02-02 04:28:59.758643 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-02-02 04:28:59.758651 | orchestrator | Monday 02 February 2026 04:28:06 +0000 (0:00:10.393) 0:02:20.991 ******* 2026-02-02 04:28:59.758659 | orchestrator | changed: [testbed-manager] 2026-02-02 04:28:59.758667 | orchestrator | changed: [testbed-node-5] 2026-02-02 04:28:59.758675 | orchestrator | changed: [testbed-node-3] 2026-02-02 04:28:59.758683 | orchestrator | changed: [testbed-node-4] 2026-02-02 04:28:59.758691 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:28:59.758698 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:28:59.758706 | orchestrator | changed: [testbed-node-1] 2026-02-02 
04:28:59.758714 | orchestrator |
2026-02-02 04:28:59.758722 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-02-02 04:28:59.758730 | orchestrator | Monday 02 February 2026 04:28:20 +0000 (0:00:13.773) 0:02:34.765 *******
2026-02-02 04:28:59.758737 | orchestrator | changed: [testbed-manager]
2026-02-02 04:28:59.758745 | orchestrator |
2026-02-02 04:28:59.758766 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-02-02 04:28:59.758775 | orchestrator | Monday 02 February 2026 04:28:33 +0000 (0:00:13.039) 0:02:47.804 *******
2026-02-02 04:28:59.758782 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:28:59.758790 | orchestrator | changed: [testbed-node-2]
2026-02-02 04:28:59.758798 | orchestrator | changed: [testbed-node-1]
2026-02-02 04:28:59.758806 | orchestrator |
2026-02-02 04:28:59.758814 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-02-02 04:28:59.758822 | orchestrator | Monday 02 February 2026 04:28:43 +0000 (0:00:10.198) 0:02:58.002 *******
2026-02-02 04:28:59.758830 | orchestrator | changed: [testbed-manager]
2026-02-02 04:28:59.758837 | orchestrator |
2026-02-02 04:28:59.758845 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-02-02 04:28:59.758854 | orchestrator | Monday 02 February 2026 04:28:48 +0000 (0:00:05.299) 0:03:03.302 *******
2026-02-02 04:28:59.758893 | orchestrator | changed: [testbed-node-4]
2026-02-02 04:28:59.758907 | orchestrator | changed: [testbed-node-3]
2026-02-02 04:28:59.758920 | orchestrator | changed: [testbed-node-5]
2026-02-02 04:28:59.758933 | orchestrator |
2026-02-02 04:28:59.758947 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 04:28:59.758962 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0  failed=0  skipped=8   rescued=0  ignored=0
2026-02-02 04:28:59.758979 | orchestrator | testbed-node-0  : ok=15  changed=10  unreachable=0  failed=0  skipped=11  rescued=0  ignored=0
2026-02-02 04:28:59.758992 | orchestrator | testbed-node-1  : ok=15  changed=10  unreachable=0  failed=0  skipped=11  rescued=0  ignored=0
2026-02-02 04:28:59.759006 | orchestrator | testbed-node-2  : ok=15  changed=10  unreachable=0  failed=0  skipped=11  rescued=0  ignored=0
2026-02-02 04:28:59.759017 | orchestrator | testbed-node-3  : ok=12  changed=7   unreachable=0  failed=0  skipped=12  rescued=0  ignored=0
2026-02-02 04:28:59.759026 | orchestrator | testbed-node-4  : ok=12  changed=7   unreachable=0  failed=0  skipped=12  rescued=0  ignored=0
2026-02-02 04:28:59.759045 | orchestrator | testbed-node-5  : ok=12  changed=7   unreachable=0  failed=0  skipped=12  rescued=0  ignored=0
2026-02-02 04:28:59.759055 | orchestrator |
2026-02-02 04:28:59.759065 | orchestrator |
2026-02-02 04:28:59.759074 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 04:28:59.759084 | orchestrator | Monday 02 February 2026 04:28:59 +0000 (0:00:10.211) 0:03:13.513 *******
2026-02-02 04:28:59.759093 | orchestrator | ===============================================================================
2026-02-02 04:28:59.759103 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 23.33s
2026-02-02 04:28:59.759132 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 19.46s
2026-02-02 04:28:59.759141 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.78s
2026-02-02 04:28:59.759150 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.90s
2026-02-02 04:28:59.759159 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.77s
2026-02-02 04:28:59.759168 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 13.04s
2026-02-02 04:28:59.759177 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.65s
2026-02-02 04:28:59.759187 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.39s
2026-02-02 04:28:59.759196 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.21s
2026-02-02 04:28:59.759205 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.20s
2026-02-02 04:28:59.759214 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.18s
2026-02-02 04:28:59.759224 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.30s
2026-02-02 04:28:59.759231 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.09s
2026-02-02 04:28:59.759239 | orchestrator | prometheus : Check prometheus containers -------------------------------- 3.96s
2026-02-02 04:28:59.759247 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.89s
2026-02-02 04:28:59.759255 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.79s
2026-02-02 04:28:59.759263 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.38s
2026-02-02 04:28:59.759271 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.10s
2026-02-02 04:28:59.759279 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.88s
2026-02-02 04:28:59.759286 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 1.84s
2026-02-02 04:29:04.456577 | orchestrator | 2026-02-02 04:29:04 | INFO  | Task 1042a08e-f6ae-48a4-9929-f421a7b88163 (grafana) was prepared for execution.
2026-02-02 04:29:04.456679 | orchestrator | 2026-02-02 04:29:04 | INFO  | It takes a moment until task 1042a08e-f6ae-48a4-9929-f421a7b88163 (grafana) has been started and output is visible here.
2026-02-02 04:29:14.165021 | orchestrator |
2026-02-02 04:29:14.165148 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-02 04:29:14.165194 | orchestrator |
2026-02-02 04:29:14.165217 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-02 04:29:14.165229 | orchestrator | Monday 02 February 2026 04:29:08 +0000 (0:00:00.261) 0:00:00.261 *******
2026-02-02 04:29:14.165242 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:29:14.165254 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:29:14.165265 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:29:14.165276 | orchestrator |
2026-02-02 04:29:14.165287 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-02 04:29:14.165299 | orchestrator | Monday 02 February 2026 04:29:09 +0000 (0:00:00.301) 0:00:00.562 *******
2026-02-02 04:29:14.165333 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-02-02 04:29:14.165345 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-02-02 04:29:14.165356 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-02-02 04:29:14.165368 | orchestrator |
2026-02-02 04:29:14.165379 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-02-02 04:29:14.165390 | orchestrator |
2026-02-02 04:29:14.165401 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-02-02 04:29:14.165412 | orchestrator | Monday 02 February 2026 04:29:09 +0000 (0:00:00.447) 0:00:01.009 *******
2026-02-02 04:29:14.165424 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 04:29:14.165436 | orchestrator |
2026-02-02 04:29:14.165447 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-02-02 04:29:14.165458 | orchestrator | Monday 02 February 2026 04:29:10 +0000 (0:00:00.553) 0:00:01.563 *******
2026-02-02 04:29:14.165476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-02 04:29:14.165494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-02 04:29:14.165507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-02 04:29:14.165520 | orchestrator |
2026-02-02 04:29:14.165533 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2026-02-02 04:29:14.165547 | orchestrator | Monday 02 February 2026 04:29:10 +0000 (0:00:00.839) 0:00:02.402 *******
2026-02-02 04:29:14.165559 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2026-02-02 04:29:14.165572 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2026-02-02 04:29:14.165585 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-02 04:29:14.165598 | orchestrator |
2026-02-02 04:29:14.165611 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-02-02 04:29:14.165624 | orchestrator | Monday 02 February 2026 04:29:11 +0000 (0:00:00.903) 0:00:03.306 *******
2026-02-02 04:29:14.165637 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 04:29:14.165657 | orchestrator |
2026-02-02 04:29:14.165670 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-02-02 04:29:14.165683 | orchestrator | Monday 02 February 2026 04:29:12 +0000 (0:00:00.571) 0:00:03.877 *******
2026-02-02 04:29:14.165724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-02 04:29:14.165739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-02 04:29:14.165753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-02 04:29:14.165766 | orchestrator |
2026-02-02 04:29:14.165779 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2026-02-02 04:29:14.165792 | orchestrator | Monday 02 February 2026 04:29:13 +0000 (0:00:01.262) 0:00:05.140 *******
2026-02-02 04:29:14.165806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-02 04:29:14.165819 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:29:14.165834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-02 04:29:14.165852 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:29:14.165901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-02 04:29:20.969325 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:29:20.969442 | orchestrator |
2026-02-02 04:29:20.969459 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2026-02-02 04:29:20.969473 | orchestrator | Monday 02 February 2026 04:29:14 +0000 (0:00:00.549) 0:00:05.689 *******
2026-02-02 04:29:20.969492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-02 04:29:20.969514 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:29:20.969534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-02 04:29:20.969554 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:29:20.969576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-02 04:29:20.969596 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:29:20.969615 | orchestrator |
2026-02-02 04:29:20.969628 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2026-02-02 04:29:20.969639 | orchestrator | Monday 02 February 2026 04:29:14 +0000 (0:00:00.672) 0:00:06.362 *******
2026-02-02 04:29:20.969651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-02 04:29:20.969704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-02 04:29:20.969736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-02 04:29:20.969749 | orchestrator |
2026-02-02 04:29:20.969760 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2026-02-02 04:29:20.969771 | orchestrator | Monday 02 February 2026 04:29:16 +0000 (0:00:01.249) 0:00:07.611 *******
2026-02-02 04:29:20.969782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-02 04:29:20.969794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-02 04:29:20.969805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-02 04:29:20.969824 | orchestrator |
2026-02-02 04:29:20.969835 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-02-02 04:29:20.969848 | orchestrator | Monday 02 February 2026 04:29:17 +0000 (0:00:01.649) 0:00:09.261 *******
2026-02-02 04:29:20.969861 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:29:20.969874 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:29:20.969917 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:29:20.969931 | orchestrator |
2026-02-02 04:29:20.969943 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-02-02 04:29:20.969956 | orchestrator | Monday 02 February 2026 04:29:18 +0000 (0:00:00.307) 0:00:09.568 *******
2026-02-02 04:29:20.969968 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-02 04:29:20.969981 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-02 04:29:20.969994 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-02 04:29:20.970006 | orchestrator |
2026-02-02 04:29:20.970082 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-02-02 04:29:20.970096 | orchestrator | Monday 02 February 2026 04:29:19 +0000 (0:00:01.240) 0:00:10.809 *******
2026-02-02 04:29:20.970109 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-02 04:29:20.970133 | orchestrator | changed: [testbed-node-1] =>
(item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-02 04:29:20.970152 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-02 04:29:20.970165 | orchestrator |
2026-02-02 04:29:20.970178 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-02-02 04:29:20.970202 | orchestrator | Monday 02 February 2026 04:29:20 +0000 (0:00:01.679) 0:00:12.488 *******
2026-02-02 04:29:27.262201 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-02 04:29:27.262302 | orchestrator |
2026-02-02 04:29:27.262315 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-02-02 04:29:27.262326 | orchestrator | Monday 02 February 2026 04:29:21 +0000 (0:00:00.774) 0:00:13.262 *******
2026-02-02 04:29:27.262335 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-02-02 04:29:27.262344 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-02-02 04:29:27.262352 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:29:27.262361 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:29:27.262370 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:29:27.262377 | orchestrator |
2026-02-02 04:29:27.262386 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-02-02 04:29:27.262394 | orchestrator | Monday 02 February 2026 04:29:22 +0000 (0:00:00.672) 0:00:13.935 *******
2026-02-02 04:29:27.262402 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:29:27.262410 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:29:27.262418 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:29:27.262426 | orchestrator |
2026-02-02 04:29:27.262434 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-02-02 04:29:27.262443 | orchestrator | Monday 02 February 2026 04:29:22 +0000 (0:00:00.331) 0:00:14.266 *******
2026-02-02 04:29:27.262454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1087662, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0655563, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-02 04:29:27.262489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1087662, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0655563, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-02 04:29:27.262499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1087662, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0655563, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-02 04:29:27.262508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1087709, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.092557, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-02 04:29:27.262544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1087709, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.092557, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-02 04:29:27.262553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1087709, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.092557, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-02 04:29:27.262562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1087670, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0715566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-02 04:29:27.262578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1087670, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0715566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-02 04:29:27.262586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1087670, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0715566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-02 04:29:27.262594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1087713, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.095557, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-02 04:29:27.262607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1087713, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.095557, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-02 04:29:27.262621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1087713, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.095557, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-02 04:29:31.034413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1087689, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0866299, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-02 04:29:31.034558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1087689, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0866299, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-02 04:29:31.034576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1087689, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0866299, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-02 04:29:31.034589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1087700, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0905569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-02 04:29:31.034602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1087700, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0905569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-02 04:29:31.034628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1087700, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0905569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-02 04:29:31.034659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1087660, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.063277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-02 04:29:31.034680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1087660, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.063277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-02 04:29:31.034692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84,
'inode': 1087660, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.063277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:31.034703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1087668, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0685563, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:31.034715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1087668, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0685563, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:31.034731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 34113, 'inode': 1087668, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0685563, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:31.034762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1087671, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0715566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:34.832063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1087671, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0715566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:34.832199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1087671, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0715566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:34.832215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1087695, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0885568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:34.832227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1087695, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0885568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:34.832239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1087695, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0885568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:34.832266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1087707, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.092557, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:34.832298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1087707, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.092557, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:34.832317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1087707, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.092557, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:34.832329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1087669, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0705564, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:34.832340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1087669, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0705564, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:34.832351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1087669, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0705564, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:34.832368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1087699, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0895567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:34.832387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1087699, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0895567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:38.593540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1087699, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0895567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:38.593650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1087691, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0876389, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:38.593666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1087691, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0876389, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:38.593678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1087691, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0876389, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:38.593707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1087685, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0862331, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:38.593721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1087685, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0862331, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:38.593774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': 
{'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1087685, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0862331, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:38.593787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1087682, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0835567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:38.593799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1087682, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0835567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:38.593810 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1087682, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0835567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:38.593821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1087696, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0891814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:38.593838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1087696, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0891814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:38.593864 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1087696, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0891814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:42.506229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1087672, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0825567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:42.506306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1087672, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0825567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:42.506314 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1087672, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0825567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:42.506320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1087703, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0915568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:42.506339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1087703, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0915568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2026-02-02 04:29:42.506360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1087703, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.0915568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:42.506377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1087802, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1299288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:42.506382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1087802, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1299288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:42.506387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1087802, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1299288, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:42.506392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1087734, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1056476, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:29:42.506400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1087734, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1056476, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-02 04:29:42.506416 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/haproxy.json)
2026-02-02 04:29:42.506426 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/database.json)
2026-02-02 04:29:46.503571 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/database.json)
2026-02-02 04:29:46.503683 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/database.json)
2026-02-02 04:29:46.503700 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/node-rsrc-use.json)
2026-02-02 04:29:46.503712 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/node-rsrc-use.json)
2026-02-02 04:29:46.503763 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/node-rsrc-use.json)
2026-02-02 04:29:46.503777 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/alertmanager-overview.json)
2026-02-02 04:29:46.503808 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/alertmanager-overview.json)
2026-02-02 04:29:46.503820 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/alertmanager-overview.json)
2026-02-02 04:29:46.503832 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/opensearch.json)
2026-02-02 04:29:46.503843 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/opensearch.json)
2026-02-02 04:29:46.503868 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/opensearch.json)
2026-02-02 04:29:46.503880 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/node_exporter_full.json)
2026-02-02 04:29:46.503959 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/node_exporter_full.json)
2026-02-02 04:29:50.117844 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/node_exporter_full.json)
2026-02-02 04:29:50.117995 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/prometheus-remote-write.json)
2026-02-02 04:29:50.118011 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/prometheus-remote-write.json)
2026-02-02 04:29:50.118115 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/prometheus-remote-write.json)
2026-02-02 04:29:50.118127 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/redfish.json)
2026-02-02 04:29:50.118137 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/redfish.json)
2026-02-02 04:29:50.118163 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/nodes.json)
2026-02-02 04:29:50.118173 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/redfish.json)
2026-02-02 04:29:50.118182 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/nodes.json)
2026-02-02 04:29:50.118202 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/memcached.json)
2026-02-02 04:29:50.118212 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/nodes.json)
2026-02-02 04:29:50.118221 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/memcached.json)
2026-02-02 04:29:50.118238 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/fluentd.json)
2026-02-02 04:29:53.879484 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/memcached.json)
2026-02-02 04:29:53.879571 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/fluentd.json)
2026-02-02 04:29:53.879610 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/libvirt.json)
2026-02-02 04:29:53.879619 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/fluentd.json)
2026-02-02 04:29:53.879625 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/libvirt.json)
2026-02-02 04:29:53.879632 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/elasticsearch.json)
2026-02-02 04:29:53.879651 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/libvirt.json)
2026-02-02 04:29:53.879658 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/elasticsearch.json)
2026-02-02 04:29:53.879671 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/node-cluster-rsrc-use.json)
2026-02-02 04:29:53.879682 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/elasticsearch.json)
2026-02-02 04:29:53.879689 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/node-cluster-rsrc-use.json)
2026-02-02 04:29:53.879695 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/rabbitmq.json)
2026-02-02 04:29:53.879708 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/node-cluster-rsrc-use.json)
2026-02-02 04:29:57.759633 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/rabbitmq.json)
2026-02-02 04:29:57.759726 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/prometheus_alertmanager.json)
2026-02-02 04:29:57.759744 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/rabbitmq.json)
2026-02-02 04:29:57.759749 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/prometheus_alertmanager.json)
2026-02-02 04:29:57.759753 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/blackbox.json)
2026-02-02 04:29:57.759757 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/prometheus_alertmanager.json)
2026-02-02 04:29:57.759771 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/blackbox.json)
2026-02-02 04:29:57.759780 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/cadvisor.json)
2026-02-02 04:29:57.759786 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/blackbox.json)
2026-02-02 04:29:57.759790 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/cadvisor.json)
2026-02-02 04:29:57.759794 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/node_exporter_side_by_side.json)
2026-02-02 04:29:57.759801 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/node_exporter_side_by_side.json)
2026-02-02 04:29:57.759809 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/cadvisor.json)
2026-02-02 04:31:34.345091 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/prometheus.json)
2026-02-02 04:31:34.345226 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/prometheus.json)
2026-02-02 04:31:34.345243 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/node_exporter_side_by_side.json)
2026-02-02 04:31:34.345257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False,
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1087784, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769999375.1205573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-02 04:31:34.345269 | orchestrator | 2026-02-02 04:31:34.345283 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-02-02 04:31:34.345296 | orchestrator | Monday 02 February 2026 04:29:59 +0000 (0:00:36.904) 0:00:51.171 ******* 2026-02-02 04:31:34.345307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-02 04:31:34.345362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-02 04:31:34.345382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-02 04:31:34.345401 | orchestrator | 2026-02-02 04:31:34.345412 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-02-02 04:31:34.345423 | orchestrator | Monday 02 February 2026 04:30:00 +0000 (0:00:00.975) 0:00:52.146 ******* 2026-02-02 04:31:34.345434 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:31:34.345446 | orchestrator | 2026-02-02 04:31:34.345457 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-02-02 04:31:34.345468 | orchestrator | Monday 02 February 2026 04:30:02 +0000 (0:00:02.142) 0:00:54.288 ******* 2026-02-02 04:31:34.345484 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:31:34.345495 | orchestrator | 2026-02-02 04:31:34.345506 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-02 04:31:34.345517 | orchestrator | Monday 02 February 2026 04:30:05 +0000 (0:00:02.246) 0:00:56.534 ******* 2026-02-02 04:31:34.345528 | orchestrator | 2026-02-02 04:31:34.345538 | orchestrator | TASK [grafana : Flush handlers] 
************************************************ 2026-02-02 04:31:34.345549 | orchestrator | Monday 02 February 2026 04:30:05 +0000 (0:00:00.073) 0:00:56.608 ******* 2026-02-02 04:31:34.345606 | orchestrator | 2026-02-02 04:31:34.345622 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-02 04:31:34.345634 | orchestrator | Monday 02 February 2026 04:30:05 +0000 (0:00:00.071) 0:00:56.679 ******* 2026-02-02 04:31:34.345646 | orchestrator | 2026-02-02 04:31:34.345659 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-02-02 04:31:34.345671 | orchestrator | Monday 02 February 2026 04:30:05 +0000 (0:00:00.070) 0:00:56.750 ******* 2026-02-02 04:31:34.345684 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:31:34.345698 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:31:34.345711 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:31:34.345723 | orchestrator | 2026-02-02 04:31:34.345736 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-02-02 04:31:34.345749 | orchestrator | Monday 02 February 2026 04:30:07 +0000 (0:00:02.111) 0:00:58.862 ******* 2026-02-02 04:31:34.345762 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:31:34.345774 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:31:34.345787 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-02-02 04:31:34.345800 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-02-02 04:31:34.345821 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-02-02 04:31:34.345834 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 
2026-02-02 04:31:34.345847 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:31:34.345860 | orchestrator | 2026-02-02 04:31:34.345873 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-02-02 04:31:34.345886 | orchestrator | Monday 02 February 2026 04:30:57 +0000 (0:00:49.978) 0:01:48.841 ******* 2026-02-02 04:31:34.345899 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:31:34.345912 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:31:34.345924 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:31:34.345935 | orchestrator | 2026-02-02 04:31:34.345946 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-02-02 04:31:34.345957 | orchestrator | Monday 02 February 2026 04:31:29 +0000 (0:00:32.084) 0:02:20.925 ******* 2026-02-02 04:31:34.345968 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:31:34.345978 | orchestrator | 2026-02-02 04:31:34.345989 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-02-02 04:31:34.346071 | orchestrator | Monday 02 February 2026 04:31:31 +0000 (0:00:02.095) 0:02:23.020 ******* 2026-02-02 04:31:34.346085 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:31:34.346096 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:31:34.346107 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:31:34.346117 | orchestrator | 2026-02-02 04:31:34.346128 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-02-02 04:31:34.346139 | orchestrator | Monday 02 February 2026 04:31:31 +0000 (0:00:00.317) 0:02:23.337 ******* 2026-02-02 04:31:34.346152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2026-02-02 04:31:34.346176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-02-02 04:31:35.000316 | orchestrator | 2026-02-02 04:31:35.000421 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-02-02 04:31:35.000438 | orchestrator | Monday 02 February 2026 04:31:34 +0000 (0:00:02.524) 0:02:25.862 ******* 2026-02-02 04:31:35.000450 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:31:35.000463 | orchestrator | 2026-02-02 04:31:35.000474 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 04:31:35.000486 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-02 04:31:35.000498 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-02 04:31:35.000509 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-02 04:31:35.000520 | orchestrator | 2026-02-02 04:31:35.000531 | orchestrator | 2026-02-02 04:31:35.000542 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 04:31:35.000553 | orchestrator | Monday 02 February 2026 04:31:34 +0000 (0:00:00.284) 0:02:26.147 ******* 2026-02-02 04:31:35.000564 | orchestrator | =============================================================================== 2026-02-02 04:31:35.000595 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 49.98s 2026-02-02 04:31:35.000607 | orchestrator | grafana : Copying over custom 
dashboards ------------------------------- 36.90s 2026-02-02 04:31:35.000639 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 32.08s 2026-02-02 04:31:35.000650 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.52s 2026-02-02 04:31:35.000661 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.25s 2026-02-02 04:31:35.000672 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.14s 2026-02-02 04:31:35.000683 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.11s 2026-02-02 04:31:35.000694 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.10s 2026-02-02 04:31:35.000705 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.68s 2026-02-02 04:31:35.000715 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.65s 2026-02-02 04:31:35.000726 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.26s 2026-02-02 04:31:35.000737 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.25s 2026-02-02 04:31:35.000747 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.24s 2026-02-02 04:31:35.000758 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.98s 2026-02-02 04:31:35.000768 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.90s 2026-02-02 04:31:35.000779 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.84s 2026-02-02 04:31:35.000790 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.77s 2026-02-02 04:31:35.000800 | orchestrator | service-cert-copy : grafana | Copying over 
backend internal TLS key ----- 0.67s 2026-02-02 04:31:35.000811 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.67s 2026-02-02 04:31:35.000822 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.57s 2026-02-02 04:31:35.342490 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh 2026-02-02 04:31:35.350488 | orchestrator | + set -e 2026-02-02 04:31:35.350603 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-02 04:31:35.350766 | orchestrator | ++ export INTERACTIVE=false 2026-02-02 04:31:35.350792 | orchestrator | ++ INTERACTIVE=false 2026-02-02 04:31:35.350810 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-02 04:31:35.350842 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-02 04:31:35.350858 | orchestrator | + source /opt/manager-vars.sh 2026-02-02 04:31:35.350875 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-02 04:31:35.350890 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-02 04:31:35.350907 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-02 04:31:35.350923 | orchestrator | ++ CEPH_VERSION=reef 2026-02-02 04:31:35.350939 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-02 04:31:35.350954 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-02 04:31:35.350969 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-02 04:31:35.350986 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-02 04:31:35.351032 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-02 04:31:35.351050 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-02 04:31:35.351067 | orchestrator | ++ export ARA=false 2026-02-02 04:31:35.351084 | orchestrator | ++ ARA=false 2026-02-02 04:31:35.351100 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-02 04:31:35.351239 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-02 04:31:35.351261 | orchestrator | ++ export TEMPEST=false 2026-02-02 04:31:35.351277 | orchestrator | ++ 
TEMPEST=false 2026-02-02 04:31:35.351294 | orchestrator | ++ export IS_ZUUL=true 2026-02-02 04:31:35.351310 | orchestrator | ++ IS_ZUUL=true 2026-02-02 04:31:35.351327 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.102 2026-02-02 04:31:35.351343 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.102 2026-02-02 04:31:35.351358 | orchestrator | ++ export EXTERNAL_API=false 2026-02-02 04:31:35.351380 | orchestrator | ++ EXTERNAL_API=false 2026-02-02 04:31:35.351401 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-02 04:31:35.351416 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-02 04:31:35.351431 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-02 04:31:35.351446 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-02 04:31:35.351461 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-02 04:31:35.351475 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-02 04:31:35.351528 | orchestrator | ++ semver 9.5.0 8.0.0 2026-02-02 04:31:35.391930 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-02 04:31:35.392042 | orchestrator | + osism apply clusterapi 2026-02-02 04:31:37.397368 | orchestrator | 2026-02-02 04:31:37 | INFO  | Task 4fc5bf33-adca-4f1c-afd0-0d9283a0d8ce (clusterapi) was prepared for execution. 2026-02-02 04:31:37.397472 | orchestrator | 2026-02-02 04:31:37 | INFO  | It takes a moment until task 4fc5bf33-adca-4f1c-afd0-0d9283a0d8ce (clusterapi) has been started and output is visible here. 
2026-02-02 04:32:31.956564 | orchestrator | 2026-02-02 04:32:31.956650 | orchestrator | PLAY [Apply cert_manager role] ************************************************* 2026-02-02 04:32:31.956658 | orchestrator | 2026-02-02 04:32:31.956662 | orchestrator | TASK [Include cert_manager role] *********************************************** 2026-02-02 04:32:31.956678 | orchestrator | Monday 02 February 2026 04:31:41 +0000 (0:00:00.222) 0:00:00.222 ******* 2026-02-02 04:32:31.956683 | orchestrator | included: cert_manager for testbed-manager 2026-02-02 04:32:31.956687 | orchestrator | 2026-02-02 04:32:31.956692 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] ********************************* 2026-02-02 04:32:31.956696 | orchestrator | Monday 02 February 2026 04:31:42 +0000 (0:00:00.238) 0:00:00.460 ******* 2026-02-02 04:32:31.956700 | orchestrator | changed: [testbed-manager] 2026-02-02 04:32:31.956705 | orchestrator | 2026-02-02 04:32:31.956709 | orchestrator | TASK [cert_manager : Deploy cert-manager] ************************************** 2026-02-02 04:32:31.956713 | orchestrator | Monday 02 February 2026 04:31:47 +0000 (0:00:05.443) 0:00:05.904 ******* 2026-02-02 04:32:31.956717 | orchestrator | changed: [testbed-manager] 2026-02-02 04:32:31.956720 | orchestrator | 2026-02-02 04:32:31.956724 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] *********************** 2026-02-02 04:32:31.956728 | orchestrator | 2026-02-02 04:32:31.956732 | orchestrator | TASK [Get capi-system namespace phase] ***************************************** 2026-02-02 04:32:31.956736 | orchestrator | Monday 02 February 2026 04:32:11 +0000 (0:00:23.834) 0:00:29.739 ******* 2026-02-02 04:32:31.956740 | orchestrator | ok: [testbed-manager] 2026-02-02 04:32:31.956744 | orchestrator | 2026-02-02 04:32:31.956748 | orchestrator | TASK [Set capi-system-phase fact] ********************************************** 2026-02-02 04:32:31.956769 | orchestrator | Monday 02 
February 2026 04:32:12 +0000 (0:00:01.144) 0:00:30.884 ******* 2026-02-02 04:32:31.956773 | orchestrator | ok: [testbed-manager] 2026-02-02 04:32:31.956777 | orchestrator | 2026-02-02 04:32:31.956781 | orchestrator | TASK [Initialize the CAPI management cluster] ********************************** 2026-02-02 04:32:31.956785 | orchestrator | Monday 02 February 2026 04:32:12 +0000 (0:00:00.128) 0:00:31.012 ******* 2026-02-02 04:32:31.956789 | orchestrator | ok: [testbed-manager] 2026-02-02 04:32:31.956793 | orchestrator | 2026-02-02 04:32:31.956797 | orchestrator | TASK [Upgrade the CAPI management cluster] ************************************* 2026-02-02 04:32:31.956801 | orchestrator | Monday 02 February 2026 04:32:29 +0000 (0:00:16.517) 0:00:47.529 ******* 2026-02-02 04:32:31.956804 | orchestrator | skipping: [testbed-manager] 2026-02-02 04:32:31.956808 | orchestrator | 2026-02-02 04:32:31.956812 | orchestrator | TASK [Install openstack-resource-controller] *********************************** 2026-02-02 04:32:31.956816 | orchestrator | Monday 02 February 2026 04:32:29 +0000 (0:00:00.183) 0:00:47.713 ******* 2026-02-02 04:32:31.956820 | orchestrator | changed: [testbed-manager] 2026-02-02 04:32:31.956824 | orchestrator | 2026-02-02 04:32:31.956827 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 04:32:31.956832 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 04:32:31.956847 | orchestrator | 2026-02-02 04:32:31.956854 | orchestrator | 2026-02-02 04:32:31.956858 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 04:32:31.956862 | orchestrator | Monday 02 February 2026 04:32:31 +0000 (0:00:02.207) 0:00:49.920 ******* 2026-02-02 04:32:31.956873 | orchestrator | =============================================================================== 2026-02-02 04:32:31.956893 | orchestrator | 
cert_manager : Deploy cert-manager ------------------------------------- 23.83s 2026-02-02 04:32:31.956897 | orchestrator | Initialize the CAPI management cluster --------------------------------- 16.52s 2026-02-02 04:32:31.956901 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.44s 2026-02-02 04:32:31.956904 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.21s 2026-02-02 04:32:31.956908 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.14s 2026-02-02 04:32:31.956912 | orchestrator | Include cert_manager role ----------------------------------------------- 0.24s 2026-02-02 04:32:31.956916 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.18s 2026-02-02 04:32:31.956920 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.13s 2026-02-02 04:32:32.325542 | orchestrator | + osism apply magnum 2026-02-02 04:32:34.525019 | orchestrator | 2026-02-02 04:32:34 | INFO  | Task 56eb38d7-9071-419b-bc2c-027ead9cf44b (magnum) was prepared for execution. 2026-02-02 04:32:34.525164 | orchestrator | 2026-02-02 04:32:34 | INFO  | It takes a moment until task 56eb38d7-9071-419b-bc2c-027ead9cf44b (magnum) has been started and output is visible here. 
2026-02-02 04:33:16.705186 | orchestrator | 2026-02-02 04:33:16.705318 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 04:33:16.705337 | orchestrator | 2026-02-02 04:33:16.705349 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 04:33:16.705362 | orchestrator | Monday 02 February 2026 04:32:39 +0000 (0:00:00.323) 0:00:00.323 ******* 2026-02-02 04:33:16.705374 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:33:16.705386 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:33:16.705398 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:33:16.705409 | orchestrator | 2026-02-02 04:33:16.705420 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 04:33:16.705431 | orchestrator | Monday 02 February 2026 04:32:39 +0000 (0:00:00.339) 0:00:00.662 ******* 2026-02-02 04:33:16.705443 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-02-02 04:33:16.705454 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-02-02 04:33:16.705466 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-02-02 04:33:16.705477 | orchestrator | 2026-02-02 04:33:16.705488 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-02-02 04:33:16.705499 | orchestrator | 2026-02-02 04:33:16.705510 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-02 04:33:16.705565 | orchestrator | Monday 02 February 2026 04:32:40 +0000 (0:00:00.460) 0:00:01.122 ******* 2026-02-02 04:33:16.705588 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 04:33:16.705608 | orchestrator | 2026-02-02 04:33:16.705627 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-02-02 
04:33:16.705647 | orchestrator | Monday 02 February 2026 04:32:40 +0000 (0:00:00.581) 0:00:01.704 ******* 2026-02-02 04:33:16.705668 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-02-02 04:33:16.705688 | orchestrator | 2026-02-02 04:33:16.705702 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-02-02 04:33:16.705715 | orchestrator | Monday 02 February 2026 04:32:44 +0000 (0:00:03.464) 0:00:05.169 ******* 2026-02-02 04:33:16.705728 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-02-02 04:33:16.705741 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-02-02 04:33:16.705754 | orchestrator | 2026-02-02 04:33:16.705767 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-02-02 04:33:16.705780 | orchestrator | Monday 02 February 2026 04:32:50 +0000 (0:00:06.367) 0:00:11.536 ******* 2026-02-02 04:33:16.705793 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-02 04:33:16.705835 | orchestrator | 2026-02-02 04:33:16.705849 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-02-02 04:33:16.705876 | orchestrator | Monday 02 February 2026 04:32:53 +0000 (0:00:03.291) 0:00:14.828 ******* 2026-02-02 04:33:16.705889 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-02 04:33:16.705902 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-02-02 04:33:16.705915 | orchestrator | 2026-02-02 04:33:16.705927 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-02-02 04:33:16.705940 | orchestrator | Monday 02 February 2026 04:32:57 +0000 (0:00:03.924) 0:00:18.752 ******* 2026-02-02 04:33:16.705952 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-02-02 04:33:16.705966 | orchestrator | 2026-02-02 04:33:16.705978 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-02-02 04:33:16.705991 | orchestrator | Monday 02 February 2026 04:33:00 +0000 (0:00:03.219) 0:00:21.972 ******* 2026-02-02 04:33:16.706004 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-02-02 04:33:16.706069 | orchestrator | 2026-02-02 04:33:16.706112 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-02-02 04:33:16.706124 | orchestrator | Monday 02 February 2026 04:33:04 +0000 (0:00:03.608) 0:00:25.581 ******* 2026-02-02 04:33:16.706135 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:33:16.706146 | orchestrator | 2026-02-02 04:33:16.706157 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-02-02 04:33:16.706168 | orchestrator | Monday 02 February 2026 04:33:07 +0000 (0:00:03.245) 0:00:28.826 ******* 2026-02-02 04:33:16.706178 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:33:16.706189 | orchestrator | 2026-02-02 04:33:16.706200 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-02-02 04:33:16.706211 | orchestrator | Monday 02 February 2026 04:33:11 +0000 (0:00:03.917) 0:00:32.743 ******* 2026-02-02 04:33:16.706222 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:33:16.706233 | orchestrator | 2026-02-02 04:33:16.706245 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-02-02 04:33:16.706256 | orchestrator | Monday 02 February 2026 04:33:15 +0000 (0:00:03.460) 0:00:36.204 ******* 2026-02-02 04:33:16.706291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-02 04:33:16.706307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-02 04:33:16.706335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-02 04:33:16.706348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 04:33:16.706361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 04:33:16.706379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 04:33:24.255563 | orchestrator | 2026-02-02 04:33:24.255668 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-02-02 04:33:24.255684 | orchestrator | Monday 02 February 2026 04:33:16 +0000 (0:00:01.566) 0:00:37.771 ******* 2026-02-02 04:33:24.255695 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:33:24.255706 | orchestrator | 2026-02-02 04:33:24.255716 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-02-02 04:33:24.255727 | orchestrator | Monday 02 February 2026 04:33:16 +0000 (0:00:00.164) 0:00:37.935 ******* 2026-02-02 04:33:24.255736 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:33:24.255746 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:33:24.255779 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:33:24.255789 | orchestrator | 2026-02-02 04:33:24.255799 | orchestrator | 
TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-02-02 04:33:24.255808 | orchestrator | Monday 02 February 2026 04:33:17 +0000 (0:00:00.291) 0:00:38.227 ******* 2026-02-02 04:33:24.255818 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-02 04:33:24.255828 | orchestrator | 2026-02-02 04:33:24.255838 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-02-02 04:33:24.255848 | orchestrator | Monday 02 February 2026 04:33:17 +0000 (0:00:00.859) 0:00:39.086 ******* 2026-02-02 04:33:24.255860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-02 04:33:24.255888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-02 04:33:24.255899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-02 04:33:24.255928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 04:33:24.255952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 04:33:24.255963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 04:33:24.255973 | orchestrator | 2026-02-02 04:33:24.255995 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-02-02 04:33:24.256006 
| orchestrator | Monday 02 February 2026 04:33:20 +0000 (0:00:02.375) 0:00:41.462 ******* 2026-02-02 04:33:24.256016 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:33:24.256027 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:33:24.256037 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:33:24.256047 | orchestrator | 2026-02-02 04:33:24.256056 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-02 04:33:24.256066 | orchestrator | Monday 02 February 2026 04:33:21 +0000 (0:00:00.643) 0:00:42.106 ******* 2026-02-02 04:33:24.256100 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 04:33:24.256115 | orchestrator | 2026-02-02 04:33:24.256127 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-02-02 04:33:24.256139 | orchestrator | Monday 02 February 2026 04:33:21 +0000 (0:00:00.638) 0:00:42.744 ******* 2026-02-02 04:33:24.256151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-02 04:33:24.256172 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-02 04:33:25.116666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-02 04:33:25.116808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 04:33:25.116837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 04:33:25.116857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 04:33:25.116874 | orchestrator | 2026-02-02 04:33:25.116890 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-02-02 04:33:25.116902 | orchestrator | Monday 02 February 2026 04:33:24 +0000 (0:00:02.587) 0:00:45.332 ******* 2026-02-02 04:33:25.116952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-02 04:33:25.116965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 04:33:25.116975 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:33:25.116992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-02 04:33:25.117003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 04:33:25.117013 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:33:25.117023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-02 04:33:25.117049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 04:33:28.871934 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:33:28.872039 | orchestrator | 2026-02-02 
04:33:28.872054 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-02-02 04:33:28.872068 | orchestrator | Monday 02 February 2026 04:33:25 +0000 (0:00:00.857) 0:00:46.189 ******* 2026-02-02 04:33:28.872160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-02 04:33:28.872204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 04:33:28.872218 | 
orchestrator | skipping: [testbed-node-0] 2026-02-02 04:33:28.872230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-02 04:33:28.872263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 04:33:28.872275 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:33:28.872307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-02 04:33:28.872320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 04:33:28.872331 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:33:28.872342 | orchestrator | 2026-02-02 04:33:28.872353 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-02-02 04:33:28.872370 | orchestrator | Monday 02 February 2026 04:33:26 +0000 (0:00:00.949) 0:00:47.139 ******* 2026-02-02 04:33:28.872382 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-02 04:33:28.872403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-02 04:33:28.872423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-02 04:33:35.166625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 04:33:35.166732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 04:33:35.166744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 04:33:35.166771 | orchestrator | 2026-02-02 04:33:35.166782 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-02-02 04:33:35.166792 | orchestrator | Monday 02 February 2026 04:33:28 +0000 (0:00:02.807) 0:00:49.947 ******* 2026-02-02 04:33:35.166800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-02 04:33:35.166821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-02 04:33:35.166830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-02 04:33:35.166843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 04:33:35.166851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 04:33:35.166864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 04:33:35.166872 | orchestrator | 2026-02-02 04:33:35.166879 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-02-02 04:33:35.166887 | orchestrator | Monday 02 February 2026 04:33:34 +0000 (0:00:05.641) 0:00:55.588 ******* 2026-02-02 04:33:35.166901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-02 04:33:37.040291 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 04:33:37.040395 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:33:37.040432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-02 04:33:37.040468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 04:33:37.040480 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:33:37.040492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-02 04:33:37.040520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 04:33:37.040532 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:33:37.040544 | orchestrator | 2026-02-02 04:33:37.040556 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-02-02 04:33:37.040569 | orchestrator | Monday 02 February 2026 04:33:35 +0000 (0:00:00.658) 0:00:56.247 ******* 2026-02-02 04:33:37.040587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-02 04:33:37.040608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-02 04:33:37.040620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-02 04:33:37.040631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 04:33:37.040652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-02 04:34:24.661426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-02-02 04:34:24.661569 | orchestrator | 2026-02-02 04:34:24.661585 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-02 04:34:24.661594 | orchestrator | Monday 02 February 2026 04:33:37 +0000 (0:00:01.868) 0:00:58.116 ******* 2026-02-02 04:34:24.661603 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:34:24.661611 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:34:24.661619 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:34:24.661626 | orchestrator | 2026-02-02 04:34:24.661633 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-02-02 04:34:24.661641 | orchestrator | Monday 02 February 2026 04:33:37 +0000 (0:00:00.527) 0:00:58.643 ******* 2026-02-02 04:34:24.661648 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:34:24.661655 | orchestrator | 2026-02-02 04:34:24.661663 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-02-02 04:34:24.661670 | orchestrator | Monday 02 February 2026 04:33:39 +0000 (0:00:02.007) 0:01:00.651 ******* 2026-02-02 04:34:24.661677 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:34:24.661684 | orchestrator | 2026-02-02 04:34:24.661692 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-02-02 04:34:24.661699 | orchestrator | Monday 02 February 2026 04:33:41 +0000 (0:00:02.152) 0:01:02.803 ******* 2026-02-02 04:34:24.661706 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:34:24.661713 | orchestrator | 2026-02-02 04:34:24.661721 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-02 04:34:24.661728 | orchestrator | Monday 02 February 2026 04:33:58 +0000 (0:00:16.363) 0:01:19.167 ******* 2026-02-02 04:34:24.661735 | orchestrator | 2026-02-02 04:34:24.661742 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-02-02 04:34:24.661750 | orchestrator | Monday 02 February 2026 04:33:58 +0000 (0:00:00.077) 0:01:19.245 ******* 2026-02-02 04:34:24.661757 | orchestrator | 2026-02-02 04:34:24.661764 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-02 04:34:24.661771 | orchestrator | Monday 02 February 2026 04:33:58 +0000 (0:00:00.073) 0:01:19.318 ******* 2026-02-02 04:34:24.661779 | orchestrator | 2026-02-02 04:34:24.661786 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-02-02 04:34:24.661793 | orchestrator | Monday 02 February 2026 04:33:58 +0000 (0:00:00.072) 0:01:19.391 ******* 2026-02-02 04:34:24.661801 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:34:24.661814 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:34:24.661825 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:34:24.661836 | orchestrator | 2026-02-02 04:34:24.661847 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-02-02 04:34:24.661857 | orchestrator | Monday 02 February 2026 04:34:13 +0000 (0:00:15.290) 0:01:34.681 ******* 2026-02-02 04:34:24.661869 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:34:24.661879 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:34:24.661891 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:34:24.661903 | orchestrator | 2026-02-02 04:34:24.661915 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 04:34:24.661927 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-02 04:34:24.661939 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-02 04:34:24.661951 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-02-02 04:34:24.661962 | orchestrator | 2026-02-02 04:34:24.661975 | orchestrator | 2026-02-02 04:34:24.661987 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 04:34:24.662009 | orchestrator | Monday 02 February 2026 04:34:24 +0000 (0:00:10.675) 0:01:45.357 ******* 2026-02-02 04:34:24.662082 | orchestrator | =============================================================================== 2026-02-02 04:34:24.662095 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.36s 2026-02-02 04:34:24.662104 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 15.29s 2026-02-02 04:34:24.662113 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.68s 2026-02-02 04:34:24.662147 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.37s 2026-02-02 04:34:24.662161 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.64s 2026-02-02 04:34:24.662175 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.92s 2026-02-02 04:34:24.662188 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.92s 2026-02-02 04:34:24.662215 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.61s 2026-02-02 04:34:24.662225 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.46s 2026-02-02 04:34:24.662234 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.46s 2026-02-02 04:34:24.662242 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.29s 2026-02-02 04:34:24.662249 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.25s 2026-02-02 04:34:24.662257 | 
orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.22s 2026-02-02 04:34:24.662264 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.81s 2026-02-02 04:34:24.662278 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.59s 2026-02-02 04:34:24.662285 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.38s 2026-02-02 04:34:24.662292 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.15s 2026-02-02 04:34:24.662299 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.01s 2026-02-02 04:34:24.662306 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.87s 2026-02-02 04:34:24.662314 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.57s 2026-02-02 04:34:25.470126 | orchestrator | ok: Runtime: 1:41:39.994050 2026-02-02 04:34:25.721920 | 2026-02-02 04:34:25.722097 | TASK [Deploy in a nutshell] 2026-02-02 04:34:26.269924 | orchestrator | skipping: Conditional result was False 2026-02-02 04:34:26.293336 | 2026-02-02 04:34:26.293490 | TASK [Bootstrap services] 2026-02-02 04:34:27.036918 | orchestrator | 2026-02-02 04:34:27.037038 | orchestrator | # BOOTSTRAP 2026-02-02 04:34:27.037047 | orchestrator | 2026-02-02 04:34:27.037071 | orchestrator | + set -e 2026-02-02 04:34:27.037076 | orchestrator | + echo 2026-02-02 04:34:27.037081 | orchestrator | + echo '# BOOTSTRAP' 2026-02-02 04:34:27.037088 | orchestrator | + echo 2026-02-02 04:34:27.037172 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-02-02 04:34:27.045816 | orchestrator | + set -e 2026-02-02 04:34:27.045851 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-02-02 04:34:29.229034 | orchestrator | 2026-02-02 04:34:29 | INFO  | It takes a 
moment until task 72987761-cfe2-4705-995a-93753e8de01c (flavor-manager) has been started and output is visible here. 2026-02-02 04:34:36.823871 | orchestrator | 2026-02-02 04:34:32 | INFO  | Flavor SCS-1L-1 created 2026-02-02 04:34:36.823997 | orchestrator | 2026-02-02 04:34:32 | INFO  | Flavor SCS-1L-1-5 created 2026-02-02 04:34:36.824019 | orchestrator | 2026-02-02 04:34:33 | INFO  | Flavor SCS-1V-2 created 2026-02-02 04:34:36.824032 | orchestrator | 2026-02-02 04:34:33 | INFO  | Flavor SCS-1V-2-5 created 2026-02-02 04:34:36.824044 | orchestrator | 2026-02-02 04:34:33 | INFO  | Flavor SCS-1V-4 created 2026-02-02 04:34:36.824055 | orchestrator | 2026-02-02 04:34:33 | INFO  | Flavor SCS-1V-4-10 created 2026-02-02 04:34:36.824066 | orchestrator | 2026-02-02 04:34:33 | INFO  | Flavor SCS-1V-8 created 2026-02-02 04:34:36.824078 | orchestrator | 2026-02-02 04:34:33 | INFO  | Flavor SCS-1V-8-20 created 2026-02-02 04:34:36.824096 | orchestrator | 2026-02-02 04:34:33 | INFO  | Flavor SCS-2V-4 created 2026-02-02 04:34:36.824107 | orchestrator | 2026-02-02 04:34:34 | INFO  | Flavor SCS-2V-4-10 created 2026-02-02 04:34:36.824119 | orchestrator | 2026-02-02 04:34:34 | INFO  | Flavor SCS-2V-8 created 2026-02-02 04:34:36.824162 | orchestrator | 2026-02-02 04:34:34 | INFO  | Flavor SCS-2V-8-20 created 2026-02-02 04:34:36.824175 | orchestrator | 2026-02-02 04:34:34 | INFO  | Flavor SCS-2V-16 created 2026-02-02 04:34:36.824187 | orchestrator | 2026-02-02 04:34:34 | INFO  | Flavor SCS-2V-16-50 created 2026-02-02 04:34:36.824198 | orchestrator | 2026-02-02 04:34:34 | INFO  | Flavor SCS-4V-8 created 2026-02-02 04:34:36.824209 | orchestrator | 2026-02-02 04:34:34 | INFO  | Flavor SCS-4V-8-20 created 2026-02-02 04:34:36.824220 | orchestrator | 2026-02-02 04:34:35 | INFO  | Flavor SCS-4V-16 created 2026-02-02 04:34:36.824231 | orchestrator | 2026-02-02 04:34:35 | INFO  | Flavor SCS-4V-16-50 created 2026-02-02 04:34:36.824243 | orchestrator | 2026-02-02 04:34:35 | INFO  | Flavor 
SCS-4V-32 created 2026-02-02 04:34:36.824254 | orchestrator | 2026-02-02 04:34:35 | INFO  | Flavor SCS-4V-32-100 created 2026-02-02 04:34:36.824265 | orchestrator | 2026-02-02 04:34:35 | INFO  | Flavor SCS-8V-16 created 2026-02-02 04:34:36.824276 | orchestrator | 2026-02-02 04:34:35 | INFO  | Flavor SCS-8V-16-50 created 2026-02-02 04:34:36.824288 | orchestrator | 2026-02-02 04:34:35 | INFO  | Flavor SCS-8V-32 created 2026-02-02 04:34:36.824299 | orchestrator | 2026-02-02 04:34:35 | INFO  | Flavor SCS-8V-32-100 created 2026-02-02 04:34:36.824324 | orchestrator | 2026-02-02 04:34:36 | INFO  | Flavor SCS-16V-32 created 2026-02-02 04:34:36.824336 | orchestrator | 2026-02-02 04:34:36 | INFO  | Flavor SCS-16V-32-100 created 2026-02-02 04:34:36.824347 | orchestrator | 2026-02-02 04:34:36 | INFO  | Flavor SCS-2V-4-20s created 2026-02-02 04:34:36.824357 | orchestrator | 2026-02-02 04:34:36 | INFO  | Flavor SCS-4V-8-50s created 2026-02-02 04:34:36.824369 | orchestrator | 2026-02-02 04:34:36 | INFO  | Flavor SCS-8V-32-100s created 2026-02-02 04:34:39.151525 | orchestrator | 2026-02-02 04:34:39 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-02-02 04:34:49.257410 | orchestrator | 2026-02-02 04:34:49 | INFO  | Task 04db5a3e-6bf5-44e6-b81e-056f7f19cbe8 (bootstrap-basic) was prepared for execution. 2026-02-02 04:34:49.257507 | orchestrator | 2026-02-02 04:34:49 | INFO  | It takes a moment until task 04db5a3e-6bf5-44e6-b81e-056f7f19cbe8 (bootstrap-basic) has been started and output is visible here. 
2026-02-02 04:35:33.091509 | orchestrator | 2026-02-02 04:35:33.091622 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-02-02 04:35:33.091637 | orchestrator | 2026-02-02 04:35:33.091649 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-02 04:35:33.091659 | orchestrator | Monday 02 February 2026 04:34:54 +0000 (0:00:00.091) 0:00:00.091 ******* 2026-02-02 04:35:33.091674 | orchestrator | ok: [localhost] 2026-02-02 04:35:33.091693 | orchestrator | 2026-02-02 04:35:33.091709 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-02-02 04:35:33.091726 | orchestrator | Monday 02 February 2026 04:34:56 +0000 (0:00:01.933) 0:00:02.025 ******* 2026-02-02 04:35:33.091742 | orchestrator | ok: [localhost] 2026-02-02 04:35:33.091759 | orchestrator | 2026-02-02 04:35:33.091775 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-02-02 04:35:33.091792 | orchestrator | Monday 02 February 2026 04:35:03 +0000 (0:00:07.328) 0:00:09.354 ******* 2026-02-02 04:35:33.091809 | orchestrator | changed: [localhost] 2026-02-02 04:35:33.091826 | orchestrator | 2026-02-02 04:35:33.091842 | orchestrator | TASK [Create public network] *************************************************** 2026-02-02 04:35:33.091858 | orchestrator | Monday 02 February 2026 04:35:09 +0000 (0:00:06.267) 0:00:15.621 ******* 2026-02-02 04:35:33.091874 | orchestrator | changed: [localhost] 2026-02-02 04:35:33.091892 | orchestrator | 2026-02-02 04:35:33.091909 | orchestrator | TASK [Set public network to default] ******************************************* 2026-02-02 04:35:33.091928 | orchestrator | Monday 02 February 2026 04:35:14 +0000 (0:00:04.995) 0:00:20.616 ******* 2026-02-02 04:35:33.091952 | orchestrator | changed: [localhost] 2026-02-02 04:35:33.091969 | orchestrator | 2026-02-02 04:35:33.091986 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-02-02 04:35:33.092003 | orchestrator | Monday 02 February 2026 04:35:20 +0000 (0:00:06.296) 0:00:26.913 ******* 2026-02-02 04:35:33.092021 | orchestrator | changed: [localhost] 2026-02-02 04:35:33.092038 | orchestrator | 2026-02-02 04:35:33.092056 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-02-02 04:35:33.092075 | orchestrator | Monday 02 February 2026 04:35:25 +0000 (0:00:04.369) 0:00:31.282 ******* 2026-02-02 04:35:33.092092 | orchestrator | changed: [localhost] 2026-02-02 04:35:33.092110 | orchestrator | 2026-02-02 04:35:33.092128 | orchestrator | TASK [Create manager role] ***************************************************** 2026-02-02 04:35:33.092161 | orchestrator | Monday 02 February 2026 04:35:29 +0000 (0:00:03.847) 0:00:35.130 ******* 2026-02-02 04:35:33.092208 | orchestrator | ok: [localhost] 2026-02-02 04:35:33.092224 | orchestrator | 2026-02-02 04:35:33.092242 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 04:35:33.092261 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 04:35:33.092280 | orchestrator | 2026-02-02 04:35:33.092297 | orchestrator | 2026-02-02 04:35:33.092315 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 04:35:33.092334 | orchestrator | Monday 02 February 2026 04:35:32 +0000 (0:00:03.693) 0:00:38.824 ******* 2026-02-02 04:35:33.092351 | orchestrator | =============================================================================== 2026-02-02 04:35:33.092368 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.33s 2026-02-02 04:35:33.092385 | orchestrator | Set public network to default ------------------------------------------- 6.30s 2026-02-02 04:35:33.092403 | 
orchestrator | Create volume type LUKS ------------------------------------------------- 6.27s 2026-02-02 04:35:33.092421 | orchestrator | Create public network --------------------------------------------------- 5.00s 2026-02-02 04:35:33.092466 | orchestrator | Create public subnet ---------------------------------------------------- 4.37s 2026-02-02 04:35:33.092477 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.85s 2026-02-02 04:35:33.092488 | orchestrator | Create manager role ----------------------------------------------------- 3.69s 2026-02-02 04:35:33.092498 | orchestrator | Gathering Facts --------------------------------------------------------- 1.93s 2026-02-02 04:35:35.579867 | orchestrator | 2026-02-02 04:35:35 | INFO  | It takes a moment until task cdf8f954-c478-4767-a75d-b1d04b024bda (image-manager) has been started and output is visible here. 2026-02-02 04:36:17.055844 | orchestrator | 2026-02-02 04:35:38 | INFO  | Processing image 'Cirros 0.6.2' 2026-02-02 04:36:17.055925 | orchestrator | 2026-02-02 04:35:38 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-02-02 04:36:17.055932 | orchestrator | 2026-02-02 04:35:38 | INFO  | Importing image Cirros 0.6.2 2026-02-02 04:36:17.055937 | orchestrator | 2026-02-02 04:35:38 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-02-02 04:36:17.055943 | orchestrator | 2026-02-02 04:35:40 | INFO  | Waiting for image to leave queued state... 2026-02-02 04:36:17.055948 | orchestrator | 2026-02-02 04:35:42 | INFO  | Waiting for import to complete... 
2026-02-02 04:36:17.055952 | orchestrator | 2026-02-02 04:35:52 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-02-02 04:36:17.055956 | orchestrator | 2026-02-02 04:35:52 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-02-02 04:36:17.055960 | orchestrator | 2026-02-02 04:35:52 | INFO  | Setting internal_version = 0.6.2 2026-02-02 04:36:17.055965 | orchestrator | 2026-02-02 04:35:52 | INFO  | Setting image_original_user = cirros 2026-02-02 04:36:17.055969 | orchestrator | 2026-02-02 04:35:52 | INFO  | Adding tag os:cirros 2026-02-02 04:36:17.055973 | orchestrator | 2026-02-02 04:35:53 | INFO  | Setting property architecture: x86_64 2026-02-02 04:36:17.055977 | orchestrator | 2026-02-02 04:35:53 | INFO  | Setting property hw_disk_bus: scsi 2026-02-02 04:36:17.055980 | orchestrator | 2026-02-02 04:35:53 | INFO  | Setting property hw_rng_model: virtio 2026-02-02 04:36:17.055984 | orchestrator | 2026-02-02 04:35:53 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-02 04:36:17.055988 | orchestrator | 2026-02-02 04:35:54 | INFO  | Setting property hw_watchdog_action: reset 2026-02-02 04:36:17.055992 | orchestrator | 2026-02-02 04:35:54 | INFO  | Setting property hypervisor_type: qemu 2026-02-02 04:36:17.055996 | orchestrator | 2026-02-02 04:35:54 | INFO  | Setting property os_distro: cirros 2026-02-02 04:36:17.056000 | orchestrator | 2026-02-02 04:35:55 | INFO  | Setting property os_purpose: minimal 2026-02-02 04:36:17.056004 | orchestrator | 2026-02-02 04:35:55 | INFO  | Setting property replace_frequency: never 2026-02-02 04:36:17.056007 | orchestrator | 2026-02-02 04:35:55 | INFO  | Setting property uuid_validity: none 2026-02-02 04:36:17.056011 | orchestrator | 2026-02-02 04:35:55 | INFO  | Setting property provided_until: none 2026-02-02 04:36:17.056015 | orchestrator | 2026-02-02 04:35:56 | INFO  | Setting property image_description: Cirros 2026-02-02 04:36:17.056019 | orchestrator | 2026-02-02 04:35:56 | INFO  | 
Setting property image_name: Cirros 2026-02-02 04:36:17.056022 | orchestrator | 2026-02-02 04:35:56 | INFO  | Setting property internal_version: 0.6.2 2026-02-02 04:36:17.056026 | orchestrator | 2026-02-02 04:35:56 | INFO  | Setting property image_original_user: cirros 2026-02-02 04:36:17.056044 | orchestrator | 2026-02-02 04:35:56 | INFO  | Setting property os_version: 0.6.2 2026-02-02 04:36:17.056053 | orchestrator | 2026-02-02 04:35:57 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-02-02 04:36:17.056058 | orchestrator | 2026-02-02 04:35:57 | INFO  | Setting property image_build_date: 2023-05-30 2026-02-02 04:36:17.056062 | orchestrator | 2026-02-02 04:35:57 | INFO  | Checking status of 'Cirros 0.6.2' 2026-02-02 04:36:17.056066 | orchestrator | 2026-02-02 04:35:57 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-02-02 04:36:17.056070 | orchestrator | 2026-02-02 04:35:57 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-02-02 04:36:17.056073 | orchestrator | 2026-02-02 04:35:57 | INFO  | Processing image 'Cirros 0.6.3' 2026-02-02 04:36:17.056080 | orchestrator | 2026-02-02 04:35:58 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-02-02 04:36:17.056084 | orchestrator | 2026-02-02 04:35:58 | INFO  | Importing image Cirros 0.6.3 2026-02-02 04:36:17.056088 | orchestrator | 2026-02-02 04:35:58 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-02-02 04:36:17.056092 | orchestrator | 2026-02-02 04:35:58 | INFO  | Waiting for image to leave queued state... 2026-02-02 04:36:17.056095 | orchestrator | 2026-02-02 04:36:00 | INFO  | Waiting for import to complete... 
2026-02-02 04:36:17.056108 | orchestrator | 2026-02-02 04:36:10 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-02-02 04:36:17.056113 | orchestrator | 2026-02-02 04:36:11 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-02-02 04:36:17.056116 | orchestrator | 2026-02-02 04:36:11 | INFO  | Setting internal_version = 0.6.3 2026-02-02 04:36:17.056120 | orchestrator | 2026-02-02 04:36:11 | INFO  | Setting image_original_user = cirros 2026-02-02 04:36:17.056124 | orchestrator | 2026-02-02 04:36:11 | INFO  | Adding tag os:cirros 2026-02-02 04:36:17.056128 | orchestrator | 2026-02-02 04:36:11 | INFO  | Setting property architecture: x86_64 2026-02-02 04:36:17.056132 | orchestrator | 2026-02-02 04:36:11 | INFO  | Setting property hw_disk_bus: scsi 2026-02-02 04:36:17.056135 | orchestrator | 2026-02-02 04:36:12 | INFO  | Setting property hw_rng_model: virtio 2026-02-02 04:36:17.056139 | orchestrator | 2026-02-02 04:36:12 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-02 04:36:17.056143 | orchestrator | 2026-02-02 04:36:12 | INFO  | Setting property hw_watchdog_action: reset 2026-02-02 04:36:17.056147 | orchestrator | 2026-02-02 04:36:12 | INFO  | Setting property hypervisor_type: qemu 2026-02-02 04:36:17.056150 | orchestrator | 2026-02-02 04:36:13 | INFO  | Setting property os_distro: cirros 2026-02-02 04:36:17.056154 | orchestrator | 2026-02-02 04:36:13 | INFO  | Setting property os_purpose: minimal 2026-02-02 04:36:17.056158 | orchestrator | 2026-02-02 04:36:13 | INFO  | Setting property replace_frequency: never 2026-02-02 04:36:17.056162 | orchestrator | 2026-02-02 04:36:13 | INFO  | Setting property uuid_validity: none 2026-02-02 04:36:17.056166 | orchestrator | 2026-02-02 04:36:14 | INFO  | Setting property provided_until: none 2026-02-02 04:36:17.056170 | orchestrator | 2026-02-02 04:36:14 | INFO  | Setting property image_description: Cirros 2026-02-02 04:36:17.056173 | orchestrator | 2026-02-02 04:36:14 | INFO  | 
Setting property image_name: Cirros 2026-02-02 04:36:17.056177 | orchestrator | 2026-02-02 04:36:14 | INFO  | Setting property internal_version: 0.6.3 2026-02-02 04:36:17.056185 | orchestrator | 2026-02-02 04:36:15 | INFO  | Setting property image_original_user: cirros 2026-02-02 04:36:17.056189 | orchestrator | 2026-02-02 04:36:15 | INFO  | Setting property os_version: 0.6.3 2026-02-02 04:36:17.056193 | orchestrator | 2026-02-02 04:36:15 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-02-02 04:36:17.056196 | orchestrator | 2026-02-02 04:36:15 | INFO  | Setting property image_build_date: 2024-09-26 2026-02-02 04:36:17.056200 | orchestrator | 2026-02-02 04:36:16 | INFO  | Checking status of 'Cirros 0.6.3' 2026-02-02 04:36:17.056225 | orchestrator | 2026-02-02 04:36:16 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-02-02 04:36:17.056229 | orchestrator | 2026-02-02 04:36:16 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-02-02 04:36:17.531662 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-02-02 04:36:20.105869 | orchestrator | 2026-02-02 04:36:20 | INFO  | date: 2026-02-02 2026-02-02 04:36:20.105968 | orchestrator | 2026-02-02 04:36:20 | INFO  | image: octavia-amphora-haproxy-2024.2.20260202.qcow2 2026-02-02 04:36:20.106143 | orchestrator | 2026-02-02 04:36:20 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260202.qcow2 2026-02-02 04:36:20.106176 | orchestrator | 2026-02-02 04:36:20 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260202.qcow2.CHECKSUM 2026-02-02 04:36:20.462118 | orchestrator | 2026-02-02 04:36:20 | INFO  | checksum: e9239e7a5eb4857bba1f8db4decf84651c6e5e1178bdb3ff3f7982c716178bbd 2026-02-02 04:36:20.550268 | orchestrator | 
2026-02-02 04:36:20 | INFO  | It takes a moment until task 6d502c46-9bd7-4137-b247-8a56180238ce (image-manager) has been started and output is visible here. 2026-02-02 04:37:43.193534 | orchestrator | 2026-02-02 04:36:22 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-02-02' 2026-02-02 04:37:43.193684 | orchestrator | 2026-02-02 04:36:23 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260202.qcow2: 200 2026-02-02 04:37:43.193706 | orchestrator | 2026-02-02 04:36:23 | INFO  | Importing image OpenStack Octavia Amphora 2026-02-02 2026-02-02 04:37:43.193718 | orchestrator | 2026-02-02 04:36:23 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260202.qcow2 2026-02-02 04:37:43.193729 | orchestrator | 2026-02-02 04:36:24 | INFO  | Waiting for image to leave queued state... 2026-02-02 04:37:43.193739 | orchestrator | 2026-02-02 04:36:26 | INFO  | Waiting for import to complete... 2026-02-02 04:37:43.193746 | orchestrator | 2026-02-02 04:36:36 | INFO  | Waiting for import to complete... 2026-02-02 04:37:43.193752 | orchestrator | 2026-02-02 04:36:46 | INFO  | Waiting for import to complete... 2026-02-02 04:37:43.193758 | orchestrator | 2026-02-02 04:36:56 | INFO  | Waiting for import to complete... 2026-02-02 04:37:43.193765 | orchestrator | 2026-02-02 04:37:07 | INFO  | Waiting for import to complete... 2026-02-02 04:37:43.193771 | orchestrator | 2026-02-02 04:37:17 | INFO  | Waiting for import to complete... 2026-02-02 04:37:43.193777 | orchestrator | 2026-02-02 04:37:27 | INFO  | Waiting for import to complete... 
2026-02-02 04:37:43.193782 | orchestrator | 2026-02-02 04:37:37 | INFO  | Import of 'OpenStack Octavia Amphora 2026-02-02' successfully completed, reloading images 2026-02-02 04:37:43.193789 | orchestrator | 2026-02-02 04:37:38 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-02-02' 2026-02-02 04:37:43.193813 | orchestrator | 2026-02-02 04:37:38 | INFO  | Setting internal_version = 2026-02-02 2026-02-02 04:37:43.193819 | orchestrator | 2026-02-02 04:37:38 | INFO  | Setting image_original_user = ubuntu 2026-02-02 04:37:43.193825 | orchestrator | 2026-02-02 04:37:38 | INFO  | Adding tag amphora 2026-02-02 04:37:43.193831 | orchestrator | 2026-02-02 04:37:38 | INFO  | Adding tag os:ubuntu 2026-02-02 04:37:43.193836 | orchestrator | 2026-02-02 04:37:38 | INFO  | Setting property architecture: x86_64 2026-02-02 04:37:43.193842 | orchestrator | 2026-02-02 04:37:38 | INFO  | Setting property hw_disk_bus: scsi 2026-02-02 04:37:43.193847 | orchestrator | 2026-02-02 04:37:38 | INFO  | Setting property hw_rng_model: virtio 2026-02-02 04:37:43.193853 | orchestrator | 2026-02-02 04:37:39 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-02 04:37:43.193858 | orchestrator | 2026-02-02 04:37:39 | INFO  | Setting property hw_watchdog_action: reset 2026-02-02 04:37:43.193863 | orchestrator | 2026-02-02 04:37:39 | INFO  | Setting property hypervisor_type: qemu 2026-02-02 04:37:43.193869 | orchestrator | 2026-02-02 04:37:39 | INFO  | Setting property os_distro: ubuntu 2026-02-02 04:37:43.193874 | orchestrator | 2026-02-02 04:37:40 | INFO  | Setting property replace_frequency: quarterly 2026-02-02 04:37:43.193880 | orchestrator | 2026-02-02 04:37:40 | INFO  | Setting property uuid_validity: last-1 2026-02-02 04:37:43.193885 | orchestrator | 2026-02-02 04:37:40 | INFO  | Setting property provided_until: none 2026-02-02 04:37:43.193890 | orchestrator | 2026-02-02 04:37:40 | INFO  | Setting property os_purpose: network 2026-02-02 04:37:43.193907 | orchestrator 
| 2026-02-02 04:37:41 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-02-02 04:37:43.193913 | orchestrator | 2026-02-02 04:37:41 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-02-02 04:37:43.193919 | orchestrator | 2026-02-02 04:37:41 | INFO  | Setting property internal_version: 2026-02-02 2026-02-02 04:37:43.193924 | orchestrator | 2026-02-02 04:37:41 | INFO  | Setting property image_original_user: ubuntu 2026-02-02 04:37:43.193929 | orchestrator | 2026-02-02 04:37:42 | INFO  | Setting property os_version: 2026-02-02 2026-02-02 04:37:43.193935 | orchestrator | 2026-02-02 04:37:42 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260202.qcow2 2026-02-02 04:37:43.193940 | orchestrator | 2026-02-02 04:37:42 | INFO  | Setting property image_build_date: 2026-02-02 2026-02-02 04:37:43.193946 | orchestrator | 2026-02-02 04:37:42 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-02-02' 2026-02-02 04:37:43.193964 | orchestrator | 2026-02-02 04:37:42 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-02-02' 2026-02-02 04:37:43.193970 | orchestrator | 2026-02-02 04:37:43 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-02-02 04:37:43.193976 | orchestrator | 2026-02-02 04:37:43 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-02-02 04:37:43.193982 | orchestrator | 2026-02-02 04:37:43 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-02-02 04:37:43.193991 | orchestrator | 2026-02-02 04:37:43 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-02-02 04:37:43.975701 | orchestrator | ok: Runtime: 0:03:16.950377 2026-02-02 04:37:43.993248 | 2026-02-02 04:37:43.993383 | TASK [Run checks] 2026-02-02 04:37:44.757140 | orchestrator | + set -e 2026-02-02 04:37:44.757400 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-02-02 04:37:44.757416 | orchestrator | ++ export INTERACTIVE=false 2026-02-02 04:37:44.757424 | orchestrator | ++ INTERACTIVE=false 2026-02-02 04:37:44.757430 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-02 04:37:44.757435 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-02 04:37:44.757441 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-02 04:37:44.758544 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-02-02 04:37:44.762145 | orchestrator | 2026-02-02 04:37:44.762189 | orchestrator | # CHECK 2026-02-02 04:37:44.762197 | orchestrator | 2026-02-02 04:37:44.762204 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-02 04:37:44.762213 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-02 04:37:44.762220 | orchestrator | + echo 2026-02-02 04:37:44.762226 | orchestrator | + echo '# CHECK' 2026-02-02 04:37:44.762233 | orchestrator | + echo 2026-02-02 04:37:44.762244 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-02 04:37:44.763190 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-02 04:37:44.807937 | orchestrator | 2026-02-02 04:37:44.808037 | orchestrator | ## Containers @ testbed-manager 2026-02-02 04:37:44.808059 | orchestrator | 2026-02-02 04:37:44.808077 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-02 04:37:44.808092 | orchestrator | + echo 2026-02-02 04:37:44.808107 | orchestrator | + echo '## Containers @ testbed-manager' 2026-02-02 04:37:44.808122 | orchestrator | + echo 2026-02-02 04:37:44.808137 | orchestrator | + osism container testbed-manager ps 2026-02-02 04:37:46.812805 | orchestrator | 2026-02-02 04:37:46 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-02-02 04:37:47.203596 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-02 04:37:47.203688 | orchestrator | c6cf8d77e9e0 
registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 9 minutes ago Up 8 minutes prometheus_blackbox_exporter 2026-02-02 04:37:47.203702 | orchestrator | c3ef4c3359c1 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_alertmanager 2026-02-02 04:37:47.203708 | orchestrator | 39ecb9f6b2ff registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-02-02 04:37:47.203714 | orchestrator | 2167ecb5db2a registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-02 04:37:47.203720 | orchestrator | 847f3ae6404c registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_server 2026-02-02 04:37:47.203729 | orchestrator | fb90f59892a5 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 57 minutes ago Up 57 minutes cephclient 2026-02-02 04:37:47.203735 | orchestrator | 79ae511521c9 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-02 04:37:47.203740 | orchestrator | b8cee300ee4a registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-02 04:37:47.203761 | orchestrator | a9f29a7130c1 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-02 04:37:47.203767 | orchestrator | 776a96be7298 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient 2026-02-02 04:37:47.203772 | orchestrator | caadfccc9fc3 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin 2026-02-02 04:37:47.203778 | 
orchestrator | 263f55a999e6 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer 2026-02-02 04:37:47.203784 | orchestrator | 41feb4a747f5 registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit 2026-02-02 04:37:47.203789 | orchestrator | d5b508609f42 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid 2026-02-02 04:37:47.203809 | orchestrator | 43b07d6698db registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1 2026-02-02 04:37:47.203821 | orchestrator | a186f74f1f98 registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible 2026-02-02 04:37:47.203827 | orchestrator | 5046e2f3761a registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes 2026-02-02 04:37:47.203832 | orchestrator | f37926e6bc04 registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible 2026-02-02 04:37:47.203837 | orchestrator | b64cf1d7fc02 registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible 2026-02-02 04:37:47.203843 | orchestrator | 9607206d06da registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1 2026-02-02 04:37:47.203848 | orchestrator | b75ecb614ca7 registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend 2026-02-02 04:37:47.203854 | orchestrator | 18947649325f registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1 2026-02-02 
04:37:47.203863 | orchestrator | b5e183ba20d7 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1 2026-02-02 04:37:47.203868 | orchestrator | 19091736158f registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient 2026-02-02 04:37:47.203874 | orchestrator | b9cac9c51f55 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1 2026-02-02 04:37:47.203879 | orchestrator | ecd4e449bd00 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1 2026-02-02 04:37:47.203885 | orchestrator | 9c0647c714a3 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-02-02 04:37:47.203890 | orchestrator | fc4e47247529 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1 2026-02-02 04:37:47.203895 | orchestrator | 346c674a661a registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1 2026-02-02 04:37:47.203903 | orchestrator | 6bd339ff3891 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-02-02 04:37:47.563491 | orchestrator | 2026-02-02 04:37:47.563602 | orchestrator | ## Images @ testbed-manager 2026-02-02 04:37:47.563621 | orchestrator | 2026-02-02 04:37:47.563634 | orchestrator | + echo 2026-02-02 04:37:47.563647 | orchestrator | + echo '## Images @ testbed-manager' 2026-02-02 04:37:47.563662 | orchestrator | + echo 2026-02-02 04:37:47.563679 | orchestrator | + osism container testbed-manager images 2026-02-02 04:37:49.903035 | orchestrator 
| REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-02 04:37:49.903153 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 0879af7a1458 25 hours ago 238MB 2026-02-02 04:37:49.903168 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 5 days ago 41.4MB 2026-02-02 04:37:49.903176 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 2 months ago 11.5MB 2026-02-02 04:37:49.903183 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 2 months ago 608MB 2026-02-02 04:37:49.903190 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-02 04:37:49.903197 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-02 04:37:49.903204 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-02 04:37:49.903212 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 2 months ago 308MB 2026-02-02 04:37:49.903219 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-02 04:37:49.903245 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 2 months ago 404MB 2026-02-02 04:37:49.903252 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 2 months ago 839MB 2026-02-02 04:37:49.903259 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-02 04:37:49.903266 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 2 months ago 330MB 2026-02-02 04:37:49.903326 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 2 months ago 613MB 2026-02-02 04:37:49.903335 | orchestrator 
| registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 2 months ago 560MB 2026-02-02 04:37:49.903342 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 2 months ago 1.23GB 2026-02-02 04:37:49.903348 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 2 months ago 383MB 2026-02-02 04:37:49.903355 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 2 months ago 238MB 2026-02-02 04:37:49.903362 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 2 months ago 334MB 2026-02-02 04:37:49.903369 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 3 months ago 742MB 2026-02-02 04:37:49.903375 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 5 months ago 275MB 2026-02-02 04:37:49.903382 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 6 months ago 226MB 2026-02-02 04:37:49.903389 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 8 months ago 453MB 2026-02-02 04:37:49.903395 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 20 months ago 146MB 2026-02-02 04:37:49.903402 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB 2026-02-02 04:37:50.270797 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-02 04:37:50.271184 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-02 04:37:50.326785 | orchestrator | 2026-02-02 04:37:50.326883 | orchestrator | ## Containers @ testbed-node-0 2026-02-02 04:37:50.326899 | orchestrator | 2026-02-02 04:37:50.326911 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-02 04:37:50.326923 | orchestrator | + echo 2026-02-02 04:37:50.326935 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-02-02 04:37:50.326955 | orchestrator | + echo 2026-02-02 04:37:50.326967 | orchestrator | + osism container testbed-node-0 ps 
2026-02-02 04:37:52.778875 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-02 04:37:52.779010 | orchestrator | 248cc1dedc7c registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-02-02 04:37:52.779065 | orchestrator | 3272fe9022ca registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-02-02 04:37:52.779089 | orchestrator | f67020175c83 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2026-02-02 04:37:52.779110 | orchestrator | 991037a85a7e registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-02-02 04:37:52.779167 | orchestrator | bae089aa1e50 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-02-02 04:37:52.779188 | orchestrator | d1c84f69d76d registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_memcached_exporter 2026-02-02 04:37:52.779216 | orchestrator | b25ecf6995ca registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-02 04:37:52.779362 | orchestrator | 2e0a7f7fbb76 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-02 04:37:52.779382 | orchestrator | e411c0466b43 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-02 04:37:52.779401 | orchestrator | ed0112111850 
registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-02-02 04:37:52.779420 | orchestrator | 484da29133d6 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data 2026-02-02 04:37:52.779439 | orchestrator | dd0d1ea2ca7f registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api 2026-02-02 04:37:52.779457 | orchestrator | 66dd55995e92 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier 2026-02-02 04:37:52.779474 | orchestrator | b8344c9acc85 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_listener 2026-02-02 04:37:52.779491 | orchestrator | ed15ca61e7c0 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_evaluator 2026-02-02 04:37:52.779508 | orchestrator | b52a703ab94a registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-02 04:37:52.779527 | orchestrator | 6615d4ae296d registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central 2026-02-02 04:37:52.779545 | orchestrator | 7bc24295f874 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) ceilometer_notification 2026-02-02 04:37:52.779564 | orchestrator | d05cb9e3e7bc registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 20 minutes (healthy) octavia_worker 2026-02-02 04:37:52.779624 | orchestrator | 4aee59df3db5 
registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping 2026-02-02 04:37:52.779646 | orchestrator | 3d51543d4b5e registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager 2026-02-02 04:37:52.779666 | orchestrator | fbe2bdc6cb61 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent 2026-02-02 04:37:52.779702 | orchestrator | 257e4eab8295 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_api 2026-02-02 04:37:52.779721 | orchestrator | c571242a0b55 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_worker 2026-02-02 04:37:52.779740 | orchestrator | 2259d7962c9e registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_mdns 2026-02-02 04:37:52.779767 | orchestrator | 0fc765511621 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer 2026-02-02 04:37:52.779785 | orchestrator | b679c3918de8 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central 2026-02-02 04:37:52.779802 | orchestrator | 1cf084ad413c registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api 2026-02-02 04:37:52.779820 | orchestrator | 648b54261efb registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9 
2026-02-02 04:37:52.779838 | orchestrator | 44c457583129 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker 2026-02-02 04:37:52.779855 | orchestrator | 24eb95d8f4ee registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_keystone_listener 2026-02-02 04:37:52.780080 | orchestrator | 620b1b5c6d21 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_api 2026-02-02 04:37:52.780101 | orchestrator | 9b8e27746eac registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_backup 2026-02-02 04:37:52.780112 | orchestrator | 73312bb5b17e registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_volume 2026-02-02 04:37:52.780123 | orchestrator | 1598bedd4761 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_scheduler 2026-02-02 04:37:52.780134 | orchestrator | a0aa6e6d11ae registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api 2026-02-02 04:37:52.780145 | orchestrator | 17f2b4135fba registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) glance_api 2026-02-02 04:37:52.780157 | orchestrator | 3db36683441b registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) skyline_console 2026-02-02 04:37:52.780167 | orchestrator | d9b0c917ff5a registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) 
skyline_apiserver 2026-02-02 04:37:52.780178 | orchestrator | 1ef990635a74 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) horizon 2026-02-02 04:37:52.780202 | orchestrator | fe9e547e8868 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_novncproxy 2026-02-02 04:37:52.780213 | orchestrator | b90d7e7b2d84 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_conductor 2026-02-02 04:37:52.780232 | orchestrator | eb7b0ef5dddf registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_api 2026-02-02 04:37:52.780243 | orchestrator | 48cc949448f2 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_scheduler 2026-02-02 04:37:52.780254 | orchestrator | 20c4e95db5b8 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 48 minutes ago Up 48 minutes (healthy) neutron_server 2026-02-02 04:37:52.780265 | orchestrator | 0cc1b3c3072e registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) placement_api 2026-02-02 04:37:52.780314 | orchestrator | b5fc5c8ba25f registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone 2026-02-02 04:37:52.780328 | orchestrator | ba345b31c57e registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone_fernet 2026-02-02 04:37:52.780339 | orchestrator | 3cb2aaf49040 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_ssh 2026-02-02 04:37:52.780349 | 
orchestrator | 5f41afa21c71 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 55 minutes ago Up 55 minutes ceph-mgr-testbed-node-0 2026-02-02 04:37:52.780360 | orchestrator | 95f9dc3e5775 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0 2026-02-02 04:37:52.780371 | orchestrator | fef826d0639c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0 2026-02-02 04:37:52.780398 | orchestrator | 591980ba0d0f registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-02-02 04:37:52.780410 | orchestrator | 22b96d92e106 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-02-02 04:37:52.780421 | orchestrator | fe776c6c4206 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-02-02 04:37:52.780432 | orchestrator | d0c4fe1282f4 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-02-02 04:37:52.780448 | orchestrator | 9c64d85a5a9c registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-02-02 04:37:52.780459 | orchestrator | 11630d5ad373 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-02-02 04:37:52.780485 | orchestrator | f9adb87b1410 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-02-02 04:37:52.780496 | orchestrator | 21a6c3bf7318 
registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-02-02 04:37:52.780507 | orchestrator | 9b420dc2770e registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-02-02 04:37:52.780517 | orchestrator | 5ecf782937c2 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-02-02 04:37:52.780528 | orchestrator | 5c3a3ca1f4b3 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-02-02 04:37:52.780539 | orchestrator | 1d3248a1f72b registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-02-02 04:37:52.780549 | orchestrator | 98b66d892137 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-02-02 04:37:52.780559 | orchestrator | d393c69787c7 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived 2026-02-02 04:37:52.780568 | orchestrator | 47c26a08d7ba registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql 2026-02-02 04:37:52.780578 | orchestrator | 8d85cd5b7020 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy 2026-02-02 04:37:52.780588 | orchestrator | eec80842c1da registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-02 04:37:52.780598 | orchestrator | afe281307fbf registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 
"dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-02 04:37:52.780607 | orchestrator | c0abb57d9afb registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-02 04:37:53.136126 | orchestrator | 2026-02-02 04:37:53.136237 | orchestrator | ## Images @ testbed-node-0 2026-02-02 04:37:53.136254 | orchestrator | 2026-02-02 04:37:53.136267 | orchestrator | + echo 2026-02-02 04:37:53.136314 | orchestrator | + echo '## Images @ testbed-node-0' 2026-02-02 04:37:53.136328 | orchestrator | + echo 2026-02-02 04:37:53.136340 | orchestrator | + osism container testbed-node-0 images 2026-02-02 04:37:55.740893 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-02 04:37:55.741011 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-02-02 04:37:55.741028 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-02-02 04:37:55.741040 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-02-02 04:37:55.741051 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-02-02 04:37:55.741084 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-02-02 04:37:55.741097 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-02 04:37:55.741107 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-02 04:37:55.741119 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-02-02 04:37:55.741130 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-02-02 04:37:55.741140 | 
orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-02 04:37:55.741151 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-02 04:37:55.741162 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-02 04:37:55.741173 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-02 04:37:55.741183 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-02 04:37:55.741194 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB 2026-02-02 04:37:55.741207 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-02 04:37:55.741226 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-02 04:37:55.741252 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-02 04:37:55.741326 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-02 04:37:55.741347 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-02 04:37:55.741364 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-02-02 04:37:55.741382 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-02 04:37:55.741399 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-02 
04:37:55.741418 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-02 04:37:55.741436 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-02 04:37:55.741455 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-02 04:37:55.741475 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-02 04:37:55.741504 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-02 04:37:55.741524 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-02-02 04:37:55.741543 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-02 04:37:55.741578 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-02 04:37:55.741620 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-02 04:37:55.741643 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-02 04:37:55.741662 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-02 04:37:55.741681 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-02 04:37:55.741693 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-02 04:37:55.741704 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-02 04:37:55.741714 | 
orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-02 04:37:55.741725 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-02 04:37:55.741736 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-02 04:37:55.741746 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-02 04:37:55.741757 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-02 04:37:55.741767 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-02 04:37:55.741778 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-02 04:37:55.741789 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-02 04:37:55.741799 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-02 04:37:55.741811 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-02 04:37:55.741822 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-02 04:37:55.741833 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-02 04:37:55.741843 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-02 04:37:55.741854 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-02 04:37:55.741865 | 
orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-02 04:37:55.741876 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-02 04:37:55.741886 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-02 04:37:55.741897 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-02 04:37:55.741907 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-02 04:37:55.741926 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-02-02 04:37:55.741937 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-02 04:37:55.741953 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-02 04:37:55.741964 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-02 04:37:55.741975 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-02 04:37:55.741986 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-02 04:37:55.741997 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-02 04:37:55.742067 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-02 04:37:55.742083 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-02 04:37:55.742094 | 
orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-02 04:37:55.742105 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-02 04:37:55.742115 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-02 04:37:55.742126 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 8 months ago 1.27GB 2026-02-02 04:37:56.090486 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-02 04:37:56.090610 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-02 04:37:56.138115 | orchestrator | 2026-02-02 04:37:56.138201 | orchestrator | ## Containers @ testbed-node-1 2026-02-02 04:37:56.138220 | orchestrator | 2026-02-02 04:37:56.138233 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-02 04:37:56.138244 | orchestrator | + echo 2026-02-02 04:37:56.138256 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-02-02 04:37:56.138268 | orchestrator | + echo 2026-02-02 04:37:56.138328 | orchestrator | + osism container testbed-node-1 ps 2026-02-02 04:37:58.607451 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-02 04:37:58.607534 | orchestrator | 0f04c2c095ec registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-02-02 04:37:58.607543 | orchestrator | 5dfd7ca2a752 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-02-02 04:37:58.607551 | orchestrator | eb2e35443f02 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-02-02 04:37:58.607557 | orchestrator | efa604214a34 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 
"dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-02-02 04:37:58.607566 | orchestrator | a2f13f726866 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-02-02 04:37:58.607572 | orchestrator | 7128f895be6d registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_memcached_exporter 2026-02-02 04:37:58.607596 | orchestrator | a67b501a0822 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-02 04:37:58.607603 | orchestrator | f9f436b5b663 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-02 04:37:58.607610 | orchestrator | 37b5e8a7dbe1 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-02 04:37:58.607616 | orchestrator | 826cd48ee242 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-02-02 04:37:58.607623 | orchestrator | 25e554e85945 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data 2026-02-02 04:37:58.607629 | orchestrator | 594a49ba264e registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api 2026-02-02 04:37:58.607643 | orchestrator | 6e89c3c1b233 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier 2026-02-02 04:37:58.607650 | orchestrator | dc3430cae808 
registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_listener 2026-02-02 04:37:58.607656 | orchestrator | 3cb49d1c3b15 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-02-02 04:37:58.607663 | orchestrator | df480bb49b03 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-02 04:37:58.607669 | orchestrator | 508bfd8267b4 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central 2026-02-02 04:37:58.607675 | orchestrator | 228c9d3e32e8 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) ceilometer_notification 2026-02-02 04:37:58.607682 | orchestrator | 29a8a2864ea2 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-02-02 04:37:58.607701 | orchestrator | 7f7413cd194d registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping 2026-02-02 04:37:58.607708 | orchestrator | 18dbd7a80505 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager 2026-02-02 04:37:58.607715 | orchestrator | da21f78daa12 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent 2026-02-02 04:37:58.607722 | orchestrator | 3afc7ab3b7f6 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_api 2026-02-02 04:37:58.607728 | 
orchestrator | 17cd2255dbba registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_worker 2026-02-02 04:37:58.607739 | orchestrator | 399708727a98 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 25 minutes (healthy) designate_mdns 2026-02-02 04:37:58.607745 | orchestrator | 65f6135427ab registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer 2026-02-02 04:37:58.607752 | orchestrator | 1b6d082dfb44 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central 2026-02-02 04:37:58.607758 | orchestrator | 158d62b8ae86 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api 2026-02-02 04:37:58.607764 | orchestrator | f04b926a795f registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9 2026-02-02 04:37:58.607771 | orchestrator | b7060cbe4971 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker 2026-02-02 04:37:58.607777 | orchestrator | cfde7b6e2e1e registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_keystone_listener 2026-02-02 04:37:58.607784 | orchestrator | 071a73be4df9 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_api 2026-02-02 04:37:58.607790 | orchestrator | b521cc79d50c registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes 
(healthy) cinder_backup 2026-02-02 04:37:58.607796 | orchestrator | ff96bcf22ed8 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_volume 2026-02-02 04:37:58.607802 | orchestrator | 04badb311381 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_scheduler 2026-02-02 04:37:58.607808 | orchestrator | 7b2b6feaac10 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_api 2026-02-02 04:37:58.607818 | orchestrator | 2733c77271bf registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) glance_api 2026-02-02 04:37:58.607825 | orchestrator | a8842a587af5 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) skyline_console 2026-02-02 04:37:58.607831 | orchestrator | 23b4bb239c76 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) skyline_apiserver 2026-02-02 04:37:58.607842 | orchestrator | de1197fa4d24 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) horizon 2026-02-02 04:37:58.607848 | orchestrator | 16a0a96f99de registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_novncproxy 2026-02-02 04:37:58.607859 | orchestrator | 03a165201739 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_conductor 2026-02-02 04:37:58.607865 | orchestrator | 469467d337b5 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_api 2026-02-02 
04:37:58.607872 | orchestrator | dca54b00865c registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_scheduler 2026-02-02 04:37:58.607878 | orchestrator | cc839d9a199a registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 48 minutes ago Up 47 minutes (healthy) neutron_server 2026-02-02 04:37:58.607884 | orchestrator | d3c71cdbeb60 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) placement_api 2026-02-02 04:37:58.607890 | orchestrator | 544f623c54ad registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone 2026-02-02 04:37:58.607897 | orchestrator | a0339a60c652 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone_fernet 2026-02-02 04:37:58.607903 | orchestrator | c6dd78b667f9 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone_ssh 2026-02-02 04:37:58.607909 | orchestrator | da14b7f5c67d registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 55 minutes ago Up 55 minutes ceph-mgr-testbed-node-1 2026-02-02 04:37:58.607916 | orchestrator | f2823b76a97a registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1 2026-02-02 04:37:58.607923 | orchestrator | a42e682d4965 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1 2026-02-02 04:37:58.607929 | orchestrator | 4707b5d59b07 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-02-02 04:37:58.607935 | orchestrator | 397d3a731463 
registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-02-02 04:37:58.607942 | orchestrator | 7b41097622f4 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-02-02 04:37:58.607948 | orchestrator | ef6a797117d8 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-02-02 04:37:58.607954 | orchestrator | 5fd867240dd6 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-02-02 04:37:58.607961 | orchestrator | ef13b8323f69 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-02-02 04:37:58.607967 | orchestrator | 85bf55b9dd0d registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-02-02 04:37:58.607980 | orchestrator | 1ed962f01d43 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-02-02 04:37:58.607987 | orchestrator | 1cd3058666d1 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-02-02 04:37:58.607995 | orchestrator | 07c314b1289d registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-02-02 04:37:58.608003 | orchestrator | 18a9c9343420 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-02-02 04:37:58.608010 | orchestrator | e43e40816d7c 
registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-02-02 04:37:58.608021 | orchestrator | c16baebbb29b registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-02-02 04:37:58.608028 | orchestrator | 7298b83094bd registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived 2026-02-02 04:37:58.608036 | orchestrator | 79b29a346e23 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql 2026-02-02 04:37:58.608043 | orchestrator | 15e173c1588b registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy 2026-02-02 04:37:58.608050 | orchestrator | 283be3561a13 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-02 04:37:58.608061 | orchestrator | 4199d6078b87 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-02 04:37:58.608069 | orchestrator | b0349340caaf registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-02 04:37:58.978528 | orchestrator | 2026-02-02 04:37:58.978656 | orchestrator | ## Images @ testbed-node-1 2026-02-02 04:37:58.978674 | orchestrator | 2026-02-02 04:37:58.978688 | orchestrator | + echo 2026-02-02 04:37:58.978699 | orchestrator | + echo '## Images @ testbed-node-1' 2026-02-02 04:37:58.978711 | orchestrator | + echo 2026-02-02 04:37:58.978723 | orchestrator | + osism container testbed-node-1 images 2026-02-02 04:38:01.433395 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-02 04:38:01.433507 | orchestrator | 
registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-02-02 04:38:01.433523 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-02-02 04:38:01.433535 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-02-02 04:38:01.433547 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-02-02 04:38:01.433558 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-02-02 04:38:01.433569 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-02 04:38:01.433606 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-02 04:38:01.433617 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-02-02 04:38:01.433628 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-02-02 04:38:01.433639 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-02 04:38:01.433650 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-02 04:38:01.433660 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-02 04:38:01.433671 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-02 04:38:01.433682 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-02 04:38:01.433693 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 
1.15GB 2026-02-02 04:38:01.433704 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-02 04:38:01.433714 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-02 04:38:01.433725 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-02 04:38:01.433736 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-02 04:38:01.433747 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-02 04:38:01.433757 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-02-02 04:38:01.433768 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-02 04:38:01.433778 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-02 04:38:01.433789 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-02 04:38:01.433800 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-02 04:38:01.433810 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-02 04:38:01.433821 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-02 04:38:01.433832 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-02 04:38:01.433843 | orchestrator | 
registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-02-02 04:38:01.433853 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-02 04:38:01.433864 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-02 04:38:01.433894 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-02 04:38:01.433914 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-02 04:38:01.433925 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-02 04:38:01.433936 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-02 04:38:01.433947 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-02 04:38:01.433957 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-02 04:38:01.433987 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-02 04:38:01.433998 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-02 04:38:01.434009 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-02 04:38:01.434083 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-02 04:38:01.434095 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-02 04:38:01.434106 | orchestrator | 
registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-02 04:38:01.434116 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-02 04:38:01.434127 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-02 04:38:01.434137 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-02 04:38:01.434148 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-02 04:38:01.434159 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-02 04:38:01.434170 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-02 04:38:01.434181 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-02 04:38:01.434192 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-02 04:38:01.434203 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-02 04:38:01.434213 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-02 04:38:01.434224 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-02 04:38:01.434235 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-02 04:38:01.434246 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-02 04:38:01.434256 | orchestrator | 
registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-02-02 04:38:01.434267 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-02 04:38:01.434278 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-02 04:38:01.434330 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-02 04:38:01.434342 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-02 04:38:01.434353 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-02 04:38:01.434364 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-02 04:38:01.434383 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-02 04:38:01.434395 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-02 04:38:01.434405 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-02 04:38:01.434416 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-02 04:38:01.434427 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-02 04:38:01.434438 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 8 months ago 1.27GB 2026-02-02 04:38:01.812144 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-02 04:38:01.812845 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-02 04:38:01.869530 | 
orchestrator | 2026-02-02 04:38:01.869620 | orchestrator | ## Containers @ testbed-node-2 2026-02-02 04:38:01.869639 | orchestrator | 2026-02-02 04:38:01.869651 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-02 04:38:01.869660 | orchestrator | + echo 2026-02-02 04:38:01.869669 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-02-02 04:38:01.869679 | orchestrator | + echo 2026-02-02 04:38:01.869688 | orchestrator | + osism container testbed-node-2 ps 2026-02-02 04:38:04.321064 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-02 04:38:04.321177 | orchestrator | c53dd0670c33 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-02-02 04:38:04.321201 | orchestrator | 90eac90a7d50 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-02-02 04:38:04.321219 | orchestrator | 846598e7f59a registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-02-02 04:38:04.321236 | orchestrator | b71caa100d6a registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-02-02 04:38:04.321255 | orchestrator | 8710c012c100 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-02-02 04:38:04.321272 | orchestrator | f47c412a0b37 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-02-02 04:38:04.321321 | orchestrator | c5530554e08d registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-02 
04:38:04.321339 | orchestrator | a3e519aa6ccb registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-02 04:38:04.321385 | orchestrator | 835cfc4aada2 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-02 04:38:04.321405 | orchestrator | a4174b60629e registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-02-02 04:38:04.321421 | orchestrator | 3903b22e08b4 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data 2026-02-02 04:38:04.321437 | orchestrator | 0e7f72a5d743 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api 2026-02-02 04:38:04.321477 | orchestrator | e84ce476c4ec registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier 2026-02-02 04:38:04.321494 | orchestrator | f8721924055f registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_listener 2026-02-02 04:38:04.321511 | orchestrator | 05639dd54db3 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-02-02 04:38:04.321527 | orchestrator | 312fabe4623a registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-02 04:38:04.321543 | orchestrator | 5538d7b9fd65 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central 2026-02-02 04:38:04.321560 | orchestrator | 
0acd0a969c97 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) ceilometer_notification 2026-02-02 04:38:04.321577 | orchestrator | 1edd05016293 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-02-02 04:38:04.321612 | orchestrator | 112ee7a7ad18 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping 2026-02-02 04:38:04.321631 | orchestrator | 3ed5245b979d registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager 2026-02-02 04:38:04.321649 | orchestrator | 7734b2cf7035 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent 2026-02-02 04:38:04.321665 | orchestrator | 6ffb4a1e1c81 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_api 2026-02-02 04:38:04.321682 | orchestrator | 180b4d1f5b73 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_worker 2026-02-02 04:38:04.321698 | orchestrator | 2e3692e36398 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns 2026-02-02 04:38:04.321726 | orchestrator | fd9aafb65c24 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer 2026-02-02 04:38:04.321743 | orchestrator | b77c49f52429 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) 
designate_central 2026-02-02 04:38:04.321760 | orchestrator | e2c7c2091cbc registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api 2026-02-02 04:38:04.321776 | orchestrator | 104f20db3bd9 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9 2026-02-02 04:38:04.321793 | orchestrator | 0a59f07adc3c registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker 2026-02-02 04:38:04.321810 | orchestrator | a774af8d4a0a registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_keystone_listener 2026-02-02 04:38:04.321828 | orchestrator | e49190740aac registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_api 2026-02-02 04:38:04.321845 | orchestrator | f0db77f5bab7 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_backup 2026-02-02 04:38:04.321860 | orchestrator | 40bbdad018d2 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_volume 2026-02-02 04:38:04.321873 | orchestrator | fe675588a021 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_scheduler 2026-02-02 04:38:04.321885 | orchestrator | 73bf903504f8 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 30 minutes (healthy) cinder_api 2026-02-02 04:38:04.321897 | orchestrator | ab6e8af58475 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 
33 minutes (healthy) glance_api 2026-02-02 04:38:04.321909 | orchestrator | dd47f9873670 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) skyline_console 2026-02-02 04:38:04.321921 | orchestrator | 6b1b4852c537 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver 2026-02-02 04:38:04.321947 | orchestrator | 26c92a5b16a6 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) horizon 2026-02-02 04:38:04.321960 | orchestrator | 67ddbb45d8e7 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_novncproxy 2026-02-02 04:38:04.321971 | orchestrator | 488235489a1d registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_conductor 2026-02-02 04:38:04.321982 | orchestrator | 036e2db7cb59 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_api 2026-02-02 04:38:04.321999 | orchestrator | 50b4b5e3bcb4 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_scheduler 2026-02-02 04:38:04.322009 | orchestrator | d27c1c461e8a registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 48 minutes ago Up 48 minutes (healthy) neutron_server 2026-02-02 04:38:04.322072 | orchestrator | 8ba305da429a registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) placement_api 2026-02-02 04:38:04.322082 | orchestrator | dc0d111882b4 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone 
2026-02-02 04:38:04.322092 | orchestrator | 5295dba03c04 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone_fernet 2026-02-02 04:38:04.322102 | orchestrator | dbf9531bd476 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone_ssh 2026-02-02 04:38:04.322112 | orchestrator | a0da86670500 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 55 minutes ago Up 55 minutes ceph-mgr-testbed-node-2 2026-02-02 04:38:04.322121 | orchestrator | 781d5c945d64 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2 2026-02-02 04:38:04.322137 | orchestrator | 39d29fabc2d2 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2 2026-02-02 04:38:04.322147 | orchestrator | caa6a7f9744d registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-02-02 04:38:04.322161 | orchestrator | 4d0418f673e9 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-02-02 04:38:04.322171 | orchestrator | 7d6b41b36033 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-02-02 04:38:04.322180 | orchestrator | 2764192384f6 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-02-02 04:38:04.322190 | orchestrator | 8fda3409fe43 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-02-02 04:38:04.322200 | orchestrator | b4cc1a134c28 
registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-02-02 04:38:04.322209 | orchestrator | 732e27d719b8 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-02-02 04:38:04.322227 | orchestrator | b501788633ac registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-02-02 04:38:04.322238 | orchestrator | 718b1639ebff registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-02-02 04:38:04.322254 | orchestrator | 1184978385bc registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-02-02 04:38:04.322264 | orchestrator | 7001349be14d registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-02-02 04:38:04.322274 | orchestrator | 5f68eb061f37 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-02-02 04:38:04.322353 | orchestrator | 6822ea667be5 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-02-02 04:38:04.322374 | orchestrator | 7c92f75c0842 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived 2026-02-02 04:38:04.322390 | orchestrator | 8aed539d2b42 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql 2026-02-02 04:38:04.322400 | orchestrator | 4c9737771f60 
registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy 2026-02-02 04:38:04.322410 | orchestrator | b7b313de836d registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-02 04:38:04.322420 | orchestrator | 276344fe5413 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-02 04:38:04.322429 | orchestrator | 1d8bbb8fe4fe registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-02 04:38:04.695074 | orchestrator | 2026-02-02 04:38:04.695184 | orchestrator | ## Images @ testbed-node-2 2026-02-02 04:38:04.695200 | orchestrator | 2026-02-02 04:38:04.695211 | orchestrator | + echo 2026-02-02 04:38:04.695222 | orchestrator | + echo '## Images @ testbed-node-2' 2026-02-02 04:38:04.695233 | orchestrator | + echo 2026-02-02 04:38:04.695244 | orchestrator | + osism container testbed-node-2 images 2026-02-02 04:38:07.110180 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-02 04:38:07.110266 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-02-02 04:38:07.110274 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-02-02 04:38:07.110281 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-02-02 04:38:07.110317 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-02-02 04:38:07.110323 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-02-02 04:38:07.110329 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-02 
04:38:07.110335 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-02 04:38:07.110340 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-02-02 04:38:07.110362 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-02-02 04:38:07.110367 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-02 04:38:07.110376 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-02 04:38:07.110382 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-02 04:38:07.110388 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-02 04:38:07.110394 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-02 04:38:07.110399 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB 2026-02-02 04:38:07.110405 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-02 04:38:07.110410 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-02 04:38:07.110415 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-02 04:38:07.110421 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-02 04:38:07.110426 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-02 04:38:07.110431 | 
orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-02-02 04:38:07.110437 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-02 04:38:07.110442 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-02 04:38:07.110447 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-02 04:38:07.110453 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-02 04:38:07.110458 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-02 04:38:07.110463 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-02 04:38:07.110469 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-02 04:38:07.110474 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-02-02 04:38:07.110479 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-02 04:38:07.110485 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-02 04:38:07.110502 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-02 04:38:07.110508 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-02 04:38:07.110513 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-02 04:38:07.110519 | 
orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-02 04:38:07.110529 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-02 04:38:07.110534 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-02 04:38:07.110540 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-02 04:38:07.110550 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-02 04:38:07.110556 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-02 04:38:07.110561 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-02 04:38:07.110567 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-02 04:38:07.110572 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-02 04:38:07.110578 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-02 04:38:07.110583 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-02 04:38:07.110588 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-02 04:38:07.110594 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-02 04:38:07.110599 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-02 04:38:07.110605 | 
orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB
2026-02-02 04:38:07.110610 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB
2026-02-02 04:38:07.110616 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB
2026-02-02 04:38:07.110621 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB
2026-02-02 04:38:07.110626 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB
2026-02-02 04:38:07.110632 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB
2026-02-02 04:38:07.110637 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB
2026-02-02 04:38:07.110642 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB
2026-02-02 04:38:07.110648 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB
2026-02-02 04:38:07.110653 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB
2026-02-02 04:38:07.110659 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB
2026-02-02 04:38:07.110664 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB
2026-02-02 04:38:07.110669 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB
2026-02-02 04:38:07.110679 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB
2026-02-02 04:38:07.110684 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB
2026-02-02 04:38:07.110693 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB
2026-02-02 04:38:07.110699 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB
2026-02-02 04:38:07.110705 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB
2026-02-02 04:38:07.110710 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB
2026-02-02 04:38:07.110719 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB
2026-02-02 04:38:07.110724 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 8 months ago 1.27GB
2026-02-02 04:38:07.465739 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2026-02-02 04:38:07.473421 | orchestrator | + set -e
2026-02-02 04:38:07.473511 | orchestrator | + source /opt/manager-vars.sh
2026-02-02 04:38:07.473523 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-02 04:38:07.473550 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-02 04:38:07.473558 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-02 04:38:07.473566 | orchestrator | ++ CEPH_VERSION=reef
2026-02-02 04:38:07.473573 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-02 04:38:07.473582 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-02 04:38:07.473589 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-02 04:38:07.473597 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-02 04:38:07.473604 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-02 04:38:07.473612 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-02 04:38:07.473619 | orchestrator | ++ export ARA=false
2026-02-02 04:38:07.473627 | orchestrator | ++ ARA=false
2026-02-02 04:38:07.473634 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-02 04:38:07.473641 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-02 04:38:07.473649 | orchestrator | ++ export TEMPEST=false
2026-02-02 04:38:07.473656 | orchestrator | ++ TEMPEST=false
2026-02-02 04:38:07.473663 | orchestrator | ++ export IS_ZUUL=true
2026-02-02 04:38:07.473729 | orchestrator | ++ IS_ZUUL=true
2026-02-02 04:38:07.473817 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.102
2026-02-02 04:38:07.473827 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.102
2026-02-02 04:38:07.473835 | orchestrator | ++ export EXTERNAL_API=false
2026-02-02 04:38:07.473842 | orchestrator | ++ EXTERNAL_API=false
2026-02-02 04:38:07.473849 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-02 04:38:07.473856 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-02 04:38:07.473865 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-02 04:38:07.473872 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-02 04:38:07.473880 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-02 04:38:07.473887 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-02 04:38:07.473894 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-02 04:38:07.473902 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-02-02 04:38:07.480475 | orchestrator | + set -e
2026-02-02 04:38:07.480791 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-02 04:38:07.480809 | orchestrator | ++ export INTERACTIVE=false
2026-02-02 04:38:07.480817 | orchestrator | ++ INTERACTIVE=false
2026-02-02 04:38:07.480825 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-02 04:38:07.480832 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-02 04:38:07.480839 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-02 04:38:07.481239 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-02 04:38:07.486843 | orchestrator |
2026-02-02 04:38:07.486860 | orchestrator | # Ceph status
2026-02-02 04:38:07.486867 | orchestrator |
2026-02-02 04:38:07.486875 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-02 04:38:07.486883 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-02 04:38:07.486890 | orchestrator | + echo
2026-02-02 04:38:07.486898 | orchestrator | + echo '# Ceph status'
2026-02-02 04:38:07.486927 | orchestrator | + echo
2026-02-02 04:38:07.486935 | orchestrator | + ceph -s
2026-02-02 04:38:08.095697 | orchestrator | cluster:
2026-02-02 04:38:08.095770 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2026-02-02 04:38:08.095780 | orchestrator | health: HEALTH_OK
2026-02-02 04:38:08.095787 | orchestrator |
2026-02-02 04:38:08.095793 | orchestrator | services:
2026-02-02 04:38:08.095799 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 68m)
2026-02-02 04:38:08.095806 | orchestrator | mgr: testbed-node-2(active, since 55m), standbys: testbed-node-1, testbed-node-0
2026-02-02 04:38:08.095813 | orchestrator | mds: 1/1 daemons up, 2 standby
2026-02-02 04:38:08.095819 | orchestrator | osd: 6 osds: 6 up (since 64m), 6 in (since 65m)
2026-02-02 04:38:08.095825 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2026-02-02 04:38:08.095830 | orchestrator |
2026-02-02 04:38:08.095836 | orchestrator | data:
2026-02-02 04:38:08.095842 | orchestrator | volumes: 1/1 healthy
2026-02-02 04:38:08.095848 | orchestrator | pools: 14 pools, 401 pgs
2026-02-02 04:38:08.095853 | orchestrator | objects: 556 objects, 2.2 GiB
2026-02-02 04:38:08.095859 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2026-02-02 04:38:08.095865 | orchestrator | pgs: 401 active+clean
2026-02-02 04:38:08.095871 | orchestrator |
2026-02-02 04:38:08.140630 | orchestrator |
2026-02-02 04:38:08.140721 | orchestrator | # Ceph versions
2026-02-02 04:38:08.140733 | orchestrator |
2026-02-02 04:38:08.140741 | orchestrator | + echo
2026-02-02 04:38:08.140749 | orchestrator | + echo '# Ceph versions'
2026-02-02 04:38:08.140757 | orchestrator | + echo
2026-02-02 04:38:08.140764 | orchestrator | + ceph versions
2026-02-02 04:38:08.727601 | orchestrator | {
2026-02-02 04:38:08.727703 | orchestrator | "mon": {
2026-02-02 04:38:08.727721 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-02 04:38:08.727734 | orchestrator | },
2026-02-02 04:38:08.727746 | orchestrator | "mgr": {
2026-02-02 04:38:08.727758 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-02 04:38:08.727769 | orchestrator | },
2026-02-02 04:38:08.727780 | orchestrator | "osd": {
2026-02-02 04:38:08.727791 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2026-02-02 04:38:08.727802 | orchestrator | },
2026-02-02 04:38:08.727813 | orchestrator | "mds": {
2026-02-02 04:38:08.727825 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-02 04:38:08.727836 | orchestrator | },
2026-02-02 04:38:08.727846 | orchestrator | "rgw": {
2026-02-02 04:38:08.727857 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-02 04:38:08.727869 | orchestrator | },
2026-02-02 04:38:08.727880 | orchestrator | "overall": {
2026-02-02 04:38:08.727891 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2026-02-02 04:38:08.727902 | orchestrator | }
2026-02-02 04:38:08.727913 | orchestrator | }
2026-02-02 04:38:08.780573 | orchestrator |
2026-02-02 04:38:08.780677 | orchestrator | # Ceph OSD tree
2026-02-02 04:38:08.780693 | orchestrator |
2026-02-02 04:38:08.780705 | orchestrator | + echo
2026-02-02 04:38:08.780717 | orchestrator | + echo '# Ceph OSD tree'
2026-02-02 04:38:08.780729 | orchestrator | + echo
2026-02-02 04:38:08.780740 | orchestrator | + ceph osd df tree
2026-02-02 04:38:09.250854 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2026-02-02 04:38:09.250978 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 385 MiB 113 GiB 5.88 1.00 - root default
2026-02-02 04:38:09.250994 | orchestrator | -3 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-3
2026-02-02 04:38:09.251003 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.6 GiB 1.5 GiB 1 KiB 62 MiB 18 GiB 7.90 1.34 200 up osd.0
2026-02-02 04:38:09.251012 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 785 MiB 723 MiB 1 KiB 62 MiB 19 GiB 3.83 0.65 190 up osd.4
2026-02-02 04:38:09.251021 | orchestrator | -7 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-4
2026-02-02 04:38:09.251047 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.6 GiB 1.5 GiB 1 KiB 62 MiB 18 GiB 7.78 1.32 197 up osd.1
2026-02-02 04:38:09.251091 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 809 MiB 748 MiB 1 KiB 62 MiB 19 GiB 3.95 0.67 191 up osd.5
2026-02-02 04:38:09.251101 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-5
2026-02-02 04:38:09.251111 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 66 MiB 19 GiB 6.28 1.07 192 up osd.2
2026-02-02 04:38:09.251120 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 74 MiB 19 GiB 5.54 0.94 200 up osd.3
2026-02-02 04:38:09.251129 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 385 MiB 113 GiB 5.88
2026-02-02 04:38:09.251138 | orchestrator | MIN/MAX VAR: 0.65/1.34 STDDEV: 1.63
2026-02-02 04:38:09.293476 | orchestrator |
2026-02-02 04:38:09.293575 | orchestrator | # Ceph monitor status
2026-02-02 04:38:09.293590 | orchestrator |
2026-02-02 04:38:09.293602 | orchestrator | + echo
2026-02-02 04:38:09.293614 | orchestrator | + echo '# Ceph monitor status'
2026-02-02 04:38:09.293625 | orchestrator | + echo
2026-02-02 04:38:09.293636 | orchestrator | + ceph mon stat
2026-02-02 04:38:09.846746 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2026-02-02 04:38:09.892336 | orchestrator |
2026-02-02 04:38:09.892429 | orchestrator | # Ceph quorum status
2026-02-02 04:38:09.892445 | orchestrator |
2026-02-02 04:38:09.892458 | orchestrator | + echo
2026-02-02 04:38:09.892469 | orchestrator | + echo '# Ceph quorum status'
2026-02-02 04:38:09.892480 | orchestrator | + echo
2026-02-02 04:38:09.892491 | orchestrator | + ceph quorum_status
2026-02-02 04:38:09.892503 | orchestrator | + jq
2026-02-02 04:38:10.509783 | orchestrator | {
2026-02-02 04:38:10.509891 | orchestrator | "election_epoch": 8,
2026-02-02 04:38:10.509909 | orchestrator | "quorum": [
2026-02-02 04:38:10.509923 | orchestrator | 0,
2026-02-02 04:38:10.509933 | orchestrator | 1,
2026-02-02 04:38:10.509940 | orchestrator | 2
2026-02-02 04:38:10.509947 | orchestrator | ],
2026-02-02 04:38:10.509954 | orchestrator | "quorum_names": [
2026-02-02 04:38:10.509962 | orchestrator | "testbed-node-0",
2026-02-02 04:38:10.509969 | orchestrator | "testbed-node-1",
2026-02-02 04:38:10.509976 | orchestrator | "testbed-node-2"
2026-02-02 04:38:10.509984 | orchestrator | ],
2026-02-02 04:38:10.509991 | orchestrator | "quorum_leader_name": "testbed-node-0",
2026-02-02 04:38:10.509999 | orchestrator | "quorum_age": 4085,
2026-02-02 04:38:10.510007 | orchestrator | "features": {
2026-02-02 04:38:10.510059 | orchestrator | "quorum_con": "4540138322906710015",
2026-02-02 04:38:10.510069 | orchestrator | "quorum_mon": [
2026-02-02 04:38:10.510076 | orchestrator | "kraken",
2026-02-02 04:38:10.510084 | orchestrator | "luminous",
2026-02-02 04:38:10.510091 | orchestrator | "mimic",
2026-02-02 04:38:10.510098 | orchestrator | "osdmap-prune",
2026-02-02 04:38:10.510105 | orchestrator | "nautilus",
2026-02-02 04:38:10.510113 | orchestrator | "octopus",
2026-02-02 04:38:10.510120 | orchestrator | "pacific",
2026-02-02 04:38:10.510127 | orchestrator | "elector-pinging",
2026-02-02 04:38:10.510134 | orchestrator | "quincy",
2026-02-02 04:38:10.510142 | orchestrator | "reef"
2026-02-02 04:38:10.510149 | orchestrator | ]
2026-02-02 04:38:10.510156 | orchestrator | },
2026-02-02 04:38:10.510163 | orchestrator | "monmap": {
2026-02-02 04:38:10.510171 | orchestrator | "epoch": 1,
2026-02-02 04:38:10.510178 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111",
2026-02-02 04:38:10.510187 | orchestrator | "modified": "2026-02-02T03:29:48.132562Z",
2026-02-02 04:38:10.510195 | orchestrator | "created": "2026-02-02T03:29:48.132562Z",
2026-02-02 04:38:10.510202 | orchestrator | "min_mon_release": 18,
2026-02-02 04:38:10.510209 | orchestrator | "min_mon_release_name": "reef",
2026-02-02 04:38:10.510216 | orchestrator | "election_strategy": 1,
2026-02-02 04:38:10.510223 | orchestrator | "disallowed_leaders: ": "",
2026-02-02 04:38:10.510230 | orchestrator | "stretch_mode": false,
2026-02-02 04:38:10.510238 | orchestrator | "tiebreaker_mon": "",
2026-02-02 04:38:10.510245 | orchestrator | "removed_ranks: ": "",
2026-02-02 04:38:10.510252 | orchestrator | "features": {
2026-02-02 04:38:10.510259 | orchestrator | "persistent": [
2026-02-02 04:38:10.510266 | orchestrator | "kraken",
2026-02-02 04:38:10.510316 | orchestrator | "luminous",
2026-02-02 04:38:10.510326 | orchestrator | "mimic",
2026-02-02 04:38:10.510334 | orchestrator | "osdmap-prune",
2026-02-02 04:38:10.510342 | orchestrator | "nautilus",
2026-02-02 04:38:10.510350 | orchestrator | "octopus",
2026-02-02 04:38:10.510358 | orchestrator | "pacific",
2026-02-02 04:38:10.510366 | orchestrator | "elector-pinging",
2026-02-02 04:38:10.510375 | orchestrator | "quincy",
2026-02-02 04:38:10.510383 | orchestrator | "reef"
2026-02-02 04:38:10.510392 | orchestrator | ],
2026-02-02 04:38:10.510400 | orchestrator | "optional": []
2026-02-02 04:38:10.510408 | orchestrator | },
2026-02-02 04:38:10.510416 | orchestrator | "mons": [
2026-02-02 04:38:10.510424 | orchestrator | {
2026-02-02 04:38:10.510446 | orchestrator | "rank": 0,
2026-02-02 04:38:10.510455 | orchestrator | "name": "testbed-node-0",
2026-02-02 04:38:10.510463 | orchestrator | "public_addrs": {
2026-02-02 04:38:10.510471 | orchestrator | "addrvec": [
2026-02-02 04:38:10.510480 | orchestrator | {
2026-02-02 04:38:10.510488 | orchestrator | "type": "v2",
2026-02-02 04:38:10.510497 | orchestrator | "addr": "192.168.16.10:3300",
2026-02-02 04:38:10.510508 | orchestrator | "nonce": 0
2026-02-02 04:38:10.510521 | orchestrator | },
2026-02-02 04:38:10.510531 | orchestrator | {
2026-02-02 04:38:10.510539 | orchestrator | "type": "v1",
2026-02-02 04:38:10.510548 | orchestrator | "addr": "192.168.16.10:6789",
2026-02-02 04:38:10.510556 | orchestrator | "nonce": 0
2026-02-02 04:38:10.510564 | orchestrator | }
2026-02-02 04:38:10.510573 | orchestrator | ]
2026-02-02 04:38:10.510581 | orchestrator | },
2026-02-02 04:38:10.510590 | orchestrator | "addr": "192.168.16.10:6789/0",
2026-02-02 04:38:10.510598 | orchestrator | "public_addr": "192.168.16.10:6789/0",
2026-02-02 04:38:10.510606 | orchestrator | "priority": 0,
2026-02-02 04:38:10.510614 | orchestrator | "weight": 0,
2026-02-02 04:38:10.510622 | orchestrator | "crush_location": "{}"
2026-02-02 04:38:10.510630 | orchestrator | },
2026-02-02 04:38:10.510638 | orchestrator | {
2026-02-02 04:38:10.510647 | orchestrator | "rank": 1,
2026-02-02 04:38:10.510656 | orchestrator | "name": "testbed-node-1",
2026-02-02 04:38:10.510664 | orchestrator | "public_addrs": {
2026-02-02 04:38:10.510673 | orchestrator | "addrvec": [
2026-02-02 04:38:10.510681 | orchestrator | {
2026-02-02 04:38:10.510688 | orchestrator | "type": "v2",
2026-02-02 04:38:10.510695 | orchestrator | "addr": "192.168.16.11:3300",
2026-02-02 04:38:10.510702 | orchestrator | "nonce": 0
2026-02-02 04:38:10.510709 | orchestrator | },
2026-02-02 04:38:10.510716 | orchestrator | {
2026-02-02 04:38:10.510723 | orchestrator | "type": "v1",
2026-02-02 04:38:10.510730 | orchestrator | "addr": "192.168.16.11:6789",
2026-02-02 04:38:10.510877 | orchestrator | "nonce": 0
2026-02-02 04:38:10.510885 | orchestrator | }
2026-02-02 04:38:10.510893 | orchestrator | ]
2026-02-02 04:38:10.510900 | orchestrator | },
2026-02-02 04:38:10.510907 | orchestrator | "addr": "192.168.16.11:6789/0",
2026-02-02 04:38:10.510914 | orchestrator | "public_addr": "192.168.16.11:6789/0",
2026-02-02 04:38:10.510921 | orchestrator | "priority": 0,
2026-02-02 04:38:10.510932 | orchestrator | "weight": 0,
2026-02-02 04:38:10.510943 | orchestrator | "crush_location": "{}"
2026-02-02 04:38:10.510951 | orchestrator | },
2026-02-02 04:38:10.510958 | orchestrator | {
2026-02-02 04:38:10.510965 | orchestrator | "rank": 2,
2026-02-02 04:38:10.510972 | orchestrator | "name": "testbed-node-2",
2026-02-02 04:38:10.510979 | orchestrator | "public_addrs": {
2026-02-02 04:38:10.510986 | orchestrator | "addrvec": [
2026-02-02 04:38:10.510993 | orchestrator | {
2026-02-02 04:38:10.511000 | orchestrator | "type": "v2",
2026-02-02 04:38:10.511007 | orchestrator | "addr": "192.168.16.12:3300",
2026-02-02 04:38:10.511015 | orchestrator | "nonce": 0
2026-02-02 04:38:10.511022 | orchestrator | },
2026-02-02 04:38:10.511029 | orchestrator | {
2026-02-02 04:38:10.511036 | orchestrator | "type": "v1",
2026-02-02 04:38:10.511043 | orchestrator | "addr": "192.168.16.12:6789",
2026-02-02 04:38:10.511050 | orchestrator | "nonce": 0
2026-02-02 04:38:10.511057 | orchestrator | }
2026-02-02 04:38:10.511064 | orchestrator | ]
2026-02-02 04:38:10.511071 | orchestrator | },
2026-02-02 04:38:10.511078 | orchestrator | "addr": "192.168.16.12:6789/0",
2026-02-02 04:38:10.511085 | orchestrator | "public_addr": "192.168.16.12:6789/0",
2026-02-02 04:38:10.511092 | orchestrator | "priority": 0,
2026-02-02 04:38:10.511107 | orchestrator | "weight": 0,
2026-02-02 04:38:10.511114 | orchestrator | "crush_location": "{}"
2026-02-02 04:38:10.511121 | orchestrator | }
2026-02-02 04:38:10.511128 | orchestrator | ]
2026-02-02 04:38:10.511135 | orchestrator | }
2026-02-02 04:38:10.511142 | orchestrator | }
2026-02-02 04:38:10.511160 | orchestrator |
2026-02-02 04:38:10.511168 | orchestrator | # Ceph free space status
2026-02-02 04:38:10.511175 | orchestrator |
2026-02-02 04:38:10.511182 | orchestrator | + echo
2026-02-02 04:38:10.511189 | orchestrator | + echo '# Ceph free space status'
2026-02-02 04:38:10.511196 | orchestrator | + echo
2026-02-02 04:38:10.511203 | orchestrator | + ceph df
2026-02-02 04:38:11.127136 | orchestrator | --- RAW STORAGE ---
2026-02-02 04:38:11.127237 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2026-02-02 04:38:11.127267 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.88
2026-02-02 04:38:11.127348 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.88
2026-02-02 04:38:11.127361 | orchestrator |
2026-02-02 04:38:11.127372 | orchestrator | --- POOLS ---
2026-02-02 04:38:11.127383 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2026-02-02 04:38:11.127395 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB
2026-02-02 04:38:11.127406 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2026-02-02 04:38:11.127416 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2026-02-02 04:38:11.127426 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2026-02-02 04:38:11.127437 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2026-02-02 04:38:11.127449 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2026-02-02 04:38:11.127459 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB
2026-02-02 04:38:11.127469 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2026-02-02 04:38:11.127480 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 52 GiB
2026-02-02 04:38:11.127491 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2026-02-02 04:38:11.127502 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2026-02-02 04:38:11.127513 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.99 35 GiB
2026-02-02 04:38:11.127524 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2026-02-02 04:38:11.127534 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2026-02-02 04:38:11.170683 | orchestrator | ++ semver 9.5.0 5.0.0
2026-02-02 04:38:11.201354 | orchestrator | + [[ 1 -eq -1 ]]
2026-02-02 04:38:11.201433 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2026-02-02 04:38:11.201444 | orchestrator | + osism apply facts
2026-02-02 04:38:23.297850 | orchestrator | 2026-02-02 04:38:23 | INFO  | Task 96b38a99-77f5-4f13-bcae-e3b8fcf33d8b (facts) was prepared for execution.
2026-02-02 04:38:23.297951 | orchestrator | 2026-02-02 04:38:23 | INFO  | It takes a moment until task 96b38a99-77f5-4f13-bcae-e3b8fcf33d8b (facts) has been started and output is visible here.
2026-02-02 04:38:37.049516 | orchestrator |
2026-02-02 04:38:37.049610 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-02-02 04:38:37.049622 | orchestrator |
2026-02-02 04:38:37.049630 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-02 04:38:37.049639 | orchestrator | Monday 02 February 2026 04:38:27 +0000 (0:00:00.283) 0:00:00.283 *******
2026-02-02 04:38:37.049647 | orchestrator | ok: [testbed-manager]
2026-02-02 04:38:37.049656 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:38:37.049663 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:38:37.049671 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:38:37.049678 | orchestrator | ok: [testbed-node-3]
2026-02-02 04:38:37.049685 | orchestrator | ok: [testbed-node-4]
2026-02-02 04:38:37.049693 | orchestrator | ok: [testbed-node-5]
2026-02-02 04:38:37.049700 | orchestrator |
2026-02-02 04:38:37.049708 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-02 04:38:37.049735 | orchestrator | Monday 02 February 2026 04:38:28 +0000 (0:00:01.169) 0:00:01.453 *******
2026-02-02 04:38:37.049744 | orchestrator | skipping: [testbed-manager]
2026-02-02 04:38:37.049751 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:38:37.049759 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:38:37.049766 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:38:37.049774 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:38:37.049781 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:38:37.049788 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:38:37.049795 | orchestrator |
2026-02-02 04:38:37.049803 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-02 04:38:37.049810 | orchestrator |
2026-02-02 04:38:37.049818 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-02 04:38:37.049825 | orchestrator | Monday 02 February 2026 04:38:30 +0000 (0:00:01.472) 0:00:02.925 *******
2026-02-02 04:38:37.049832 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:38:37.049839 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:38:37.049846 | orchestrator | ok: [testbed-manager]
2026-02-02 04:38:37.049854 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:38:37.049861 | orchestrator | ok: [testbed-node-3]
2026-02-02 04:38:37.049868 | orchestrator | ok: [testbed-node-4]
2026-02-02 04:38:37.049875 | orchestrator | ok: [testbed-node-5]
2026-02-02 04:38:37.049883 | orchestrator |
2026-02-02 04:38:37.049890 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-02-02 04:38:37.049897 | orchestrator |
2026-02-02 04:38:37.049905 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-02-02 04:38:37.049913 | orchestrator | Monday 02 February 2026 04:38:35 +0000 (0:00:05.568) 0:00:08.494 *******
2026-02-02 04:38:37.049920 | orchestrator | skipping: [testbed-manager]
2026-02-02 04:38:37.049927 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:38:37.049934 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:38:37.049941 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:38:37.049949 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:38:37.049956 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:38:37.049963 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:38:37.049970 | orchestrator |
2026-02-02 04:38:37.049978 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 04:38:37.049985 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 04:38:37.049994 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 04:38:37.050001 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 04:38:37.050138 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 04:38:37.050151 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 04:38:37.050161 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 04:38:37.050169 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 04:38:37.050178 | orchestrator |
2026-02-02 04:38:37.050188 | orchestrator |
2026-02-02 04:38:37.050197 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 04:38:37.050205 | orchestrator | Monday 02 February 2026 04:38:36 +0000 (0:00:00.549) 0:00:09.044 *******
2026-02-02 04:38:37.050214 | orchestrator | ===============================================================================
2026-02-02 04:38:37.050223 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.57s
2026-02-02 04:38:37.050238 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.47s
2026-02-02 04:38:37.050248 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.17s
2026-02-02 04:38:37.050257 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s
2026-02-02 04:38:37.418500 | orchestrator | + osism validate ceph-mons
2026-02-02 04:39:09.537766 | orchestrator |
2026-02-02 04:39:09.537878 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2026-02-02 04:39:09.537896 | orchestrator |
2026-02-02 04:39:09.537910 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-02-02 04:39:09.537922 | orchestrator | Monday 02 February 2026 04:38:54 +0000 (0:00:00.440) 0:00:00.440 *******
2026-02-02 04:39:09.537934 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-02 04:39:09.537945 | orchestrator |
2026-02-02 04:39:09.537956 | orchestrator | TASK [Create report output directory] ******************************************
2026-02-02 04:39:09.537967 | orchestrator | Monday 02 February 2026 04:38:54 +0000 (0:00:00.880) 0:00:01.320 *******
2026-02-02 04:39:09.537978 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-02 04:39:09.537989 | orchestrator |
2026-02-02 04:39:09.538000 | orchestrator | TASK [Define report vars] ******************************************************
2026-02-02 04:39:09.538011 | orchestrator | Monday 02 February 2026 04:38:55 +0000 (0:00:01.024) 0:00:02.345 *******
2026-02-02 04:39:09.538075 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:39:09.538088 | orchestrator |
2026-02-02 04:39:09.538099 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-02-02 04:39:09.538110 | orchestrator | Monday 02 February 2026 04:38:56 +0000 (0:00:00.135) 0:00:02.480 *******
2026-02-02 04:39:09.538121 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:39:09.538132 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:39:09.538143 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:39:09.538154 | orchestrator |
2026-02-02 04:39:09.538165 | orchestrator | TASK [Get container info] ******************************************************
2026-02-02 04:39:09.538176 | orchestrator | Monday 02 February 2026 04:38:56 +0000 (0:00:00.322) 0:00:02.802 *******
2026-02-02 04:39:09.538187 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:39:09.538198 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:39:09.538209 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:39:09.538220 | orchestrator |
2026-02-02 04:39:09.538231 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-02-02 04:39:09.538329 | orchestrator | Monday 02 February 2026 04:38:57 +0000 (0:00:00.988) 0:00:03.791 *******
2026-02-02 04:39:09.538344 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:39:09.538358 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:39:09.538371 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:39:09.538385 | orchestrator |
2026-02-02 04:39:09.538398 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-02-02 04:39:09.538412 | orchestrator | Monday 02 February 2026 04:38:57 +0000 (0:00:00.307) 0:00:04.098 *******
2026-02-02 04:39:09.538425 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:39:09.538438 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:39:09.538451 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:39:09.538464 | orchestrator |
2026-02-02 04:39:09.538522 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-02 04:39:09.538537 | orchestrator | Monday 02 February 2026 04:38:58 +0000 (0:00:00.575) 0:00:04.674 *******
2026-02-02 04:39:09.538550 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:39:09.538564 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:39:09.538576 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:39:09.538589 | orchestrator |
2026-02-02 04:39:09.538602 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2026-02-02 04:39:09.538614 | orchestrator | Monday 02 February 2026 04:38:58 +0000 (0:00:00.298) 0:00:04.972 *******
2026-02-02 04:39:09.538629 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:39:09.538664 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:39:09.538676 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:39:09.538687 | orchestrator |
2026-02-02 04:39:09.538698 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2026-02-02 04:39:09.538709 | orchestrator | Monday 02 February 2026 04:38:58 +0000 (0:00:00.278) 0:00:05.250 *******
2026-02-02 04:39:09.538720 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:39:09.538731 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:39:09.538742 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:39:09.538753 | orchestrator |
2026-02-02 04:39:09.538764 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-02 04:39:09.538775 | orchestrator | Monday 02 February 2026 04:38:59 +0000 (0:00:00.516) 0:00:05.766 *******
2026-02-02 04:39:09.538786 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:39:09.538797 | orchestrator |
2026-02-02 04:39:09.538808 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-02 04:39:09.538819 | orchestrator | Monday 02 February 2026 04:38:59 +0000 (0:00:00.256) 0:00:06.023 *******
2026-02-02 04:39:09.538830 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:39:09.538841 | orchestrator |
2026-02-02 04:39:09.538852 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-02 04:39:09.538863 | orchestrator | Monday 02 February 2026 04:38:59 +0000 (0:00:00.244) 0:00:06.267 *******
2026-02-02 04:39:09.538874 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:39:09.538885 | orchestrator |
2026-02-02 04:39:09.538896 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-02 04:39:09.538907 | orchestrator | Monday 02 February 2026 04:39:00 +0000 (0:00:00.255) 0:00:06.523 *******
2026-02-02 04:39:09.538918 | orchestrator |
2026-02-02 04:39:09.538929 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-02 04:39:09.538940 | orchestrator | Monday 02 February 2026 04:39:00 +0000 (0:00:00.070) 0:00:06.593 *******
2026-02-02 04:39:09.538951 | orchestrator |
2026-02-02 04:39:09.538962 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-02 04:39:09.538973 | orchestrator | Monday 02 February 2026 04:39:00 +0000 (0:00:00.072) 0:00:06.665 *******
2026-02-02 04:39:09.538984 | orchestrator |
2026-02-02 04:39:09.538995 | orchestrator | TASK [Print report file information] *******************************************
2026-02-02 04:39:09.539005 | orchestrator | Monday 02 February 2026 04:39:00 +0000 (0:00:00.078) 0:00:06.744 *******
2026-02-02 04:39:09.539016 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:39:09.539027 | orchestrator |
2026-02-02 04:39:09.539038 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-02-02 04:39:09.539068 | orchestrator | Monday 02 February 2026 04:39:00 +0000 (0:00:00.254) 0:00:06.998 *******
2026-02-02 04:39:09.539089 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:39:09.539115 | orchestrator |
2026-02-02 04:39:09.539164 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-02-02 04:39:09.539184 | orchestrator | Monday 02 February 2026 04:39:00 +0000 (0:00:00.230) 0:00:07.229 *******
2026-02-02 04:39:09.539201 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:39:09.539220 | orchestrator |
2026-02-02 04:39:09.539314 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-02-02 04:39:09.539334 | orchestrator | Monday 02 February 2026 04:39:00 +0000 (0:00:00.114) 0:00:07.343 *******
2026-02-02 04:39:09.539345 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:39:09.539361 | orchestrator |
2026-02-02 04:39:09.539373 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-02-02 04:39:09.539384 | orchestrator | Monday 02 February 2026 04:39:02 +0000 (0:00:01.529) 0:00:08.873 *******
2026-02-02 04:39:09.539395 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:39:09.539406 | orchestrator |
2026-02-02 04:39:09.539417 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-02-02 04:39:09.539428 | orchestrator | Monday 02 February 2026 04:39:02 +0000 (0:00:00.534) 0:00:09.407 *******
2026-02-02 04:39:09.539439 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:39:09.539462 | orchestrator |
2026-02-02 04:39:09.539474 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-02-02 04:39:09.539484 | orchestrator | Monday 02 February 2026 04:39:03 +0000 (0:00:00.121) 0:00:09.529 *******
2026-02-02 04:39:09.539495 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:39:09.539506 | orchestrator |
2026-02-02 04:39:09.539517 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-02-02 04:39:09.539528 | orchestrator | Monday 02 February 2026 04:39:03 +0000 (0:00:00.350) 0:00:09.880 *******
2026-02-02 04:39:09.539539 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:39:09.539550 | orchestrator |
2026-02-02 04:39:09.539561 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-02-02 04:39:09.539572 | orchestrator | Monday 02 February 2026 04:39:03 +0000 (0:00:00.316) 0:00:10.196 *******
2026-02-02 04:39:09.539582 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:39:09.539593 | orchestrator |
2026-02-02 04:39:09.539604 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-02-02 04:39:09.539615 | orchestrator | Monday 02 February 2026 04:39:03 +0000 (0:00:00.110) 0:00:10.306 *******
2026-02-02 04:39:09.539626 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:39:09.539637 | orchestrator |
2026-02-02 04:39:09.539648 | orchestrator | TASK [Prepare
status test vars] ************************************************ 2026-02-02 04:39:09.539659 | orchestrator | Monday 02 February 2026 04:39:04 +0000 (0:00:00.120) 0:00:10.427 ******* 2026-02-02 04:39:09.539670 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:39:09.539680 | orchestrator | 2026-02-02 04:39:09.539691 | orchestrator | TASK [Gather status data] ****************************************************** 2026-02-02 04:39:09.539702 | orchestrator | Monday 02 February 2026 04:39:04 +0000 (0:00:00.119) 0:00:10.546 ******* 2026-02-02 04:39:09.539713 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:39:09.539724 | orchestrator | 2026-02-02 04:39:09.539735 | orchestrator | TASK [Set health test data] **************************************************** 2026-02-02 04:39:09.539745 | orchestrator | Monday 02 February 2026 04:39:05 +0000 (0:00:01.229) 0:00:11.776 ******* 2026-02-02 04:39:09.539756 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:39:09.539767 | orchestrator | 2026-02-02 04:39:09.539778 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-02-02 04:39:09.539789 | orchestrator | Monday 02 February 2026 04:39:05 +0000 (0:00:00.301) 0:00:12.078 ******* 2026-02-02 04:39:09.539800 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:39:09.539811 | orchestrator | 2026-02-02 04:39:09.539824 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-02-02 04:39:09.539843 | orchestrator | Monday 02 February 2026 04:39:05 +0000 (0:00:00.142) 0:00:12.221 ******* 2026-02-02 04:39:09.539862 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:39:09.539879 | orchestrator | 2026-02-02 04:39:09.539897 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-02-02 04:39:09.539916 | orchestrator | Monday 02 February 2026 04:39:05 +0000 (0:00:00.141) 0:00:12.363 ******* 2026-02-02 04:39:09.539936 | orchestrator | 
skipping: [testbed-node-0] 2026-02-02 04:39:09.539955 | orchestrator | 2026-02-02 04:39:09.539974 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-02-02 04:39:09.539988 | orchestrator | Monday 02 February 2026 04:39:06 +0000 (0:00:00.184) 0:00:12.547 ******* 2026-02-02 04:39:09.540007 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:39:09.540019 | orchestrator | 2026-02-02 04:39:09.540030 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-02-02 04:39:09.540041 | orchestrator | Monday 02 February 2026 04:39:06 +0000 (0:00:00.367) 0:00:12.915 ******* 2026-02-02 04:39:09.540052 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-02 04:39:09.540062 | orchestrator | 2026-02-02 04:39:09.540073 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-02-02 04:39:09.540084 | orchestrator | Monday 02 February 2026 04:39:06 +0000 (0:00:00.269) 0:00:13.185 ******* 2026-02-02 04:39:09.540102 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:39:09.540113 | orchestrator | 2026-02-02 04:39:09.540124 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-02 04:39:09.540135 | orchestrator | Monday 02 February 2026 04:39:07 +0000 (0:00:00.238) 0:00:13.424 ******* 2026-02-02 04:39:09.540146 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-02 04:39:09.540156 | orchestrator | 2026-02-02 04:39:09.540168 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-02 04:39:09.540178 | orchestrator | Monday 02 February 2026 04:39:08 +0000 (0:00:01.779) 0:00:15.203 ******* 2026-02-02 04:39:09.540189 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-02 04:39:09.540215 | orchestrator | 2026-02-02 04:39:09.540263 | orchestrator | TASK [Aggregate 
test results step three] *************************************** 2026-02-02 04:39:09.540284 | orchestrator | Monday 02 February 2026 04:39:09 +0000 (0:00:00.273) 0:00:15.477 ******* 2026-02-02 04:39:09.540303 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-02 04:39:09.540320 | orchestrator | 2026-02-02 04:39:09.540352 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-02 04:39:12.356633 | orchestrator | Monday 02 February 2026 04:39:09 +0000 (0:00:00.248) 0:00:15.726 ******* 2026-02-02 04:39:12.356761 | orchestrator | 2026-02-02 04:39:12.356776 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-02 04:39:12.356787 | orchestrator | Monday 02 February 2026 04:39:09 +0000 (0:00:00.070) 0:00:15.796 ******* 2026-02-02 04:39:12.356796 | orchestrator | 2026-02-02 04:39:12.356808 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-02 04:39:12.356818 | orchestrator | Monday 02 February 2026 04:39:09 +0000 (0:00:00.070) 0:00:15.866 ******* 2026-02-02 04:39:12.356828 | orchestrator | 2026-02-02 04:39:12.356838 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-02-02 04:39:12.356847 | orchestrator | Monday 02 February 2026 04:39:09 +0000 (0:00:00.073) 0:00:15.940 ******* 2026-02-02 04:39:12.356858 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-02 04:39:12.356867 | orchestrator | 2026-02-02 04:39:12.356877 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-02 04:39:12.356887 | orchestrator | Monday 02 February 2026 04:39:11 +0000 (0:00:01.613) 0:00:17.554 ******* 2026-02-02 04:39:12.356896 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-02-02 04:39:12.356906 | orchestrator |  "msg": [ 2026-02-02 
04:39:12.356918 | orchestrator |  "Validator run completed.", 2026-02-02 04:39:12.356928 | orchestrator |  "You can find the report file here:", 2026-02-02 04:39:12.356938 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-02-02T04:38:54+00:00-report.json", 2026-02-02 04:39:12.356949 | orchestrator |  "on the following host:", 2026-02-02 04:39:12.356959 | orchestrator |  "testbed-manager" 2026-02-02 04:39:12.356968 | orchestrator |  ] 2026-02-02 04:39:12.356979 | orchestrator | } 2026-02-02 04:39:12.356989 | orchestrator | 2026-02-02 04:39:12.356998 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 04:39:12.357010 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-02-02 04:39:12.357021 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 04:39:12.357032 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 04:39:12.357041 | orchestrator | 2026-02-02 04:39:12.357051 | orchestrator | 2026-02-02 04:39:12.357060 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 04:39:12.357070 | orchestrator | Monday 02 February 2026 04:39:12 +0000 (0:00:00.866) 0:00:18.420 ******* 2026-02-02 04:39:12.357111 | orchestrator | =============================================================================== 2026-02-02 04:39:12.357121 | orchestrator | Aggregate test results step one ----------------------------------------- 1.78s 2026-02-02 04:39:12.357131 | orchestrator | Write report file ------------------------------------------------------- 1.61s 2026-02-02 04:39:12.357141 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.53s 2026-02-02 04:39:12.357152 | orchestrator | Gather status data 
------------------------------------------------------ 1.23s 2026-02-02 04:39:12.357164 | orchestrator | Create report output directory ------------------------------------------ 1.02s 2026-02-02 04:39:12.357175 | orchestrator | Get container info ------------------------------------------------------ 0.99s 2026-02-02 04:39:12.357186 | orchestrator | Get timestamp for report file ------------------------------------------- 0.88s 2026-02-02 04:39:12.357198 | orchestrator | Print report file information ------------------------------------------- 0.87s 2026-02-02 04:39:12.357209 | orchestrator | Set test result to passed if container is existing ---------------------- 0.58s 2026-02-02 04:39:12.357220 | orchestrator | Set quorum test data ---------------------------------------------------- 0.53s 2026-02-02 04:39:12.357276 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.52s 2026-02-02 04:39:12.357288 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.37s 2026-02-02 04:39:12.357300 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.35s 2026-02-02 04:39:12.357311 | orchestrator | Prepare test data for container existance test -------------------------- 0.32s 2026-02-02 04:39:12.357322 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.32s 2026-02-02 04:39:12.357334 | orchestrator | Set test result to failed if container is missing ----------------------- 0.31s 2026-02-02 04:39:12.357345 | orchestrator | Set health test data ---------------------------------------------------- 0.30s 2026-02-02 04:39:12.357357 | orchestrator | Prepare test data ------------------------------------------------------- 0.30s 2026-02-02 04:39:12.357368 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.28s 2026-02-02 04:39:12.357379 | orchestrator | Aggregate test results step two 
----------------------------------------- 0.27s 2026-02-02 04:39:12.724365 | orchestrator | + osism validate ceph-mgrs 2026-02-02 04:39:33.910355 | orchestrator | 2026-02-02 04:39:33.910472 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-02-02 04:39:33.910488 | orchestrator | 2026-02-02 04:39:33.910501 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-02-02 04:39:33.910514 | orchestrator | Monday 02 February 2026 04:39:19 +0000 (0:00:00.444) 0:00:00.444 ******* 2026-02-02 04:39:33.910525 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-02 04:39:33.910537 | orchestrator | 2026-02-02 04:39:33.910548 | orchestrator | TASK [Create report output directory] ****************************************** 2026-02-02 04:39:33.910559 | orchestrator | Monday 02 February 2026 04:39:20 +0000 (0:00:00.852) 0:00:01.296 ******* 2026-02-02 04:39:33.910570 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-02 04:39:33.910581 | orchestrator | 2026-02-02 04:39:33.910592 | orchestrator | TASK [Define report vars] ****************************************************** 2026-02-02 04:39:33.910603 | orchestrator | Monday 02 February 2026 04:39:21 +0000 (0:00:00.985) 0:00:02.282 ******* 2026-02-02 04:39:33.910614 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:39:33.910626 | orchestrator | 2026-02-02 04:39:33.910637 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-02-02 04:39:33.910648 | orchestrator | Monday 02 February 2026 04:39:21 +0000 (0:00:00.139) 0:00:02.421 ******* 2026-02-02 04:39:33.910659 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:39:33.910670 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:39:33.910681 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:39:33.910692 | orchestrator | 2026-02-02 04:39:33.910703 | orchestrator | TASK [Get container 
info] ****************************************************** 2026-02-02 04:39:33.910719 | orchestrator | Monday 02 February 2026 04:39:21 +0000 (0:00:00.352) 0:00:02.774 ******* 2026-02-02 04:39:33.910786 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:39:33.910800 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:39:33.910811 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:39:33.910822 | orchestrator | 2026-02-02 04:39:33.910833 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-02-02 04:39:33.910844 | orchestrator | Monday 02 February 2026 04:39:22 +0000 (0:00:01.042) 0:00:03.816 ******* 2026-02-02 04:39:33.910868 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:39:33.910880 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:39:33.910894 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:39:33.910906 | orchestrator | 2026-02-02 04:39:33.910919 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-02-02 04:39:33.910932 | orchestrator | Monday 02 February 2026 04:39:23 +0000 (0:00:00.353) 0:00:04.169 ******* 2026-02-02 04:39:33.910945 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:39:33.910958 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:39:33.910971 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:39:33.910984 | orchestrator | 2026-02-02 04:39:33.910998 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-02 04:39:33.911010 | orchestrator | Monday 02 February 2026 04:39:23 +0000 (0:00:00.550) 0:00:04.720 ******* 2026-02-02 04:39:33.911023 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:39:33.911035 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:39:33.911047 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:39:33.911060 | orchestrator | 2026-02-02 04:39:33.911072 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2026-02-02 04:39:33.911085 | orchestrator | Monday 02 February 2026 04:39:24 +0000 (0:00:00.287) 0:00:05.008 ******* 2026-02-02 04:39:33.911098 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:39:33.911112 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:39:33.911124 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:39:33.911136 | orchestrator | 2026-02-02 04:39:33.911170 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-02-02 04:39:33.911184 | orchestrator | Monday 02 February 2026 04:39:24 +0000 (0:00:00.306) 0:00:05.315 ******* 2026-02-02 04:39:33.911197 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:39:33.911209 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:39:33.911222 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:39:33.911235 | orchestrator | 2026-02-02 04:39:33.911248 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-02 04:39:33.911259 | orchestrator | Monday 02 February 2026 04:39:24 +0000 (0:00:00.491) 0:00:05.806 ******* 2026-02-02 04:39:33.911270 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:39:33.911281 | orchestrator | 2026-02-02 04:39:33.911291 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-02 04:39:33.911302 | orchestrator | Monday 02 February 2026 04:39:25 +0000 (0:00:00.237) 0:00:06.044 ******* 2026-02-02 04:39:33.911313 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:39:33.911324 | orchestrator | 2026-02-02 04:39:33.911335 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-02 04:39:33.911346 | orchestrator | Monday 02 February 2026 04:39:25 +0000 (0:00:00.252) 0:00:06.297 ******* 2026-02-02 04:39:33.911357 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:39:33.911368 | orchestrator | 2026-02-02 04:39:33.911379 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2026-02-02 04:39:33.911390 | orchestrator | Monday 02 February 2026 04:39:25 +0000 (0:00:00.265) 0:00:06.562 ******* 2026-02-02 04:39:33.911412 | orchestrator | 2026-02-02 04:39:33.911423 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-02 04:39:33.911434 | orchestrator | Monday 02 February 2026 04:39:25 +0000 (0:00:00.086) 0:00:06.649 ******* 2026-02-02 04:39:33.911445 | orchestrator | 2026-02-02 04:39:33.911456 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-02 04:39:33.911467 | orchestrator | Monday 02 February 2026 04:39:25 +0000 (0:00:00.072) 0:00:06.721 ******* 2026-02-02 04:39:33.911486 | orchestrator | 2026-02-02 04:39:33.911497 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-02 04:39:33.911508 | orchestrator | Monday 02 February 2026 04:39:25 +0000 (0:00:00.072) 0:00:06.794 ******* 2026-02-02 04:39:33.911519 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:39:33.911530 | orchestrator | 2026-02-02 04:39:33.911541 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-02-02 04:39:33.911552 | orchestrator | Monday 02 February 2026 04:39:26 +0000 (0:00:00.242) 0:00:07.037 ******* 2026-02-02 04:39:33.911563 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:39:33.911574 | orchestrator | 2026-02-02 04:39:33.911601 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-02-02 04:39:33.911613 | orchestrator | Monday 02 February 2026 04:39:26 +0000 (0:00:00.242) 0:00:07.280 ******* 2026-02-02 04:39:33.911624 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:39:33.911635 | orchestrator | 2026-02-02 04:39:33.911646 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2026-02-02 04:39:33.911657 | orchestrator | Monday 02 February 2026 04:39:26 +0000 (0:00:00.116) 0:00:07.396 ******* 2026-02-02 04:39:33.911668 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:39:33.911679 | orchestrator | 2026-02-02 04:39:33.911690 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-02-02 04:39:33.911701 | orchestrator | Monday 02 February 2026 04:39:28 +0000 (0:00:01.870) 0:00:09.266 ******* 2026-02-02 04:39:33.911712 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:39:33.911723 | orchestrator | 2026-02-02 04:39:33.911749 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-02-02 04:39:33.911761 | orchestrator | Monday 02 February 2026 04:39:28 +0000 (0:00:00.432) 0:00:09.699 ******* 2026-02-02 04:39:33.911772 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:39:33.911783 | orchestrator | 2026-02-02 04:39:33.911794 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-02-02 04:39:33.911805 | orchestrator | Monday 02 February 2026 04:39:29 +0000 (0:00:00.339) 0:00:10.039 ******* 2026-02-02 04:39:33.911815 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:39:33.911826 | orchestrator | 2026-02-02 04:39:33.911837 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-02-02 04:39:33.911848 | orchestrator | Monday 02 February 2026 04:39:29 +0000 (0:00:00.148) 0:00:10.187 ******* 2026-02-02 04:39:33.911859 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:39:33.911870 | orchestrator | 2026-02-02 04:39:33.911880 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-02-02 04:39:33.911891 | orchestrator | Monday 02 February 2026 04:39:29 +0000 (0:00:00.148) 0:00:10.336 ******* 2026-02-02 04:39:33.911902 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-02 
04:39:33.911913 | orchestrator | 2026-02-02 04:39:33.911924 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-02-02 04:39:33.911935 | orchestrator | Monday 02 February 2026 04:39:29 +0000 (0:00:00.245) 0:00:10.581 ******* 2026-02-02 04:39:33.911945 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:39:33.911956 | orchestrator | 2026-02-02 04:39:33.911967 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-02 04:39:33.911978 | orchestrator | Monday 02 February 2026 04:39:29 +0000 (0:00:00.251) 0:00:10.832 ******* 2026-02-02 04:39:33.911989 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-02 04:39:33.912000 | orchestrator | 2026-02-02 04:39:33.912011 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-02 04:39:33.912022 | orchestrator | Monday 02 February 2026 04:39:31 +0000 (0:00:01.315) 0:00:12.147 ******* 2026-02-02 04:39:33.912033 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-02 04:39:33.912043 | orchestrator | 2026-02-02 04:39:33.912054 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-02 04:39:33.912065 | orchestrator | Monday 02 February 2026 04:39:31 +0000 (0:00:00.252) 0:00:12.401 ******* 2026-02-02 04:39:33.912082 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-02 04:39:33.912093 | orchestrator | 2026-02-02 04:39:33.912104 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-02 04:39:33.912115 | orchestrator | Monday 02 February 2026 04:39:31 +0000 (0:00:00.262) 0:00:12.663 ******* 2026-02-02 04:39:33.912126 | orchestrator | 2026-02-02 04:39:33.912137 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-02 04:39:33.912170 | orchestrator 
| Monday 02 February 2026 04:39:31 +0000 (0:00:00.071) 0:00:12.734 ******* 2026-02-02 04:39:33.912181 | orchestrator | 2026-02-02 04:39:33.912192 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-02 04:39:33.912203 | orchestrator | Monday 02 February 2026 04:39:31 +0000 (0:00:00.071) 0:00:12.806 ******* 2026-02-02 04:39:33.912214 | orchestrator | 2026-02-02 04:39:33.912224 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-02-02 04:39:33.912235 | orchestrator | Monday 02 February 2026 04:39:32 +0000 (0:00:00.294) 0:00:13.100 ******* 2026-02-02 04:39:33.912246 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-02 04:39:33.912257 | orchestrator | 2026-02-02 04:39:33.912268 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-02 04:39:33.912279 | orchestrator | Monday 02 February 2026 04:39:33 +0000 (0:00:01.362) 0:00:14.462 ******* 2026-02-02 04:39:33.912289 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-02-02 04:39:33.912301 | orchestrator |  "msg": [ 2026-02-02 04:39:33.912312 | orchestrator |  "Validator run completed.", 2026-02-02 04:39:33.912328 | orchestrator |  "You can find the report file here:", 2026-02-02 04:39:33.912339 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-02-02T04:39:20+00:00-report.json", 2026-02-02 04:39:33.912351 | orchestrator |  "on the following host:", 2026-02-02 04:39:33.912362 | orchestrator |  "testbed-manager" 2026-02-02 04:39:33.912373 | orchestrator |  ] 2026-02-02 04:39:33.912385 | orchestrator | } 2026-02-02 04:39:33.912396 | orchestrator | 2026-02-02 04:39:33.912407 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 04:39:33.912419 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2026-02-02 04:39:33.912431 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 04:39:33.912450 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 04:39:34.315580 | orchestrator | 2026-02-02 04:39:34.315658 | orchestrator | 2026-02-02 04:39:34.315668 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 04:39:34.315676 | orchestrator | Monday 02 February 2026 04:39:33 +0000 (0:00:00.407) 0:00:14.870 ******* 2026-02-02 04:39:34.315683 | orchestrator | =============================================================================== 2026-02-02 04:39:34.315689 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.87s 2026-02-02 04:39:34.315696 | orchestrator | Write report file ------------------------------------------------------- 1.36s 2026-02-02 04:39:34.315703 | orchestrator | Aggregate test results step one ----------------------------------------- 1.32s 2026-02-02 04:39:34.315709 | orchestrator | Get container info ------------------------------------------------------ 1.04s 2026-02-02 04:39:34.315715 | orchestrator | Create report output directory ------------------------------------------ 0.99s 2026-02-02 04:39:34.315722 | orchestrator | Get timestamp for report file ------------------------------------------- 0.85s 2026-02-02 04:39:34.315728 | orchestrator | Set test result to passed if container is existing ---------------------- 0.55s 2026-02-02 04:39:34.315734 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.49s 2026-02-02 04:39:34.315762 | orchestrator | Flush handlers ---------------------------------------------------------- 0.44s 2026-02-02 04:39:34.315769 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.43s 2026-02-02 04:39:34.315775 | 
orchestrator | Print report file information ------------------------------------------- 0.41s 2026-02-02 04:39:34.315781 | orchestrator | Set test result to failed if container is missing ----------------------- 0.35s 2026-02-02 04:39:34.315787 | orchestrator | Prepare test data for container existance test -------------------------- 0.35s 2026-02-02 04:39:34.315793 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.34s 2026-02-02 04:39:34.315800 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.31s 2026-02-02 04:39:34.315806 | orchestrator | Prepare test data ------------------------------------------------------- 0.29s 2026-02-02 04:39:34.315812 | orchestrator | Aggregate test results step three --------------------------------------- 0.27s 2026-02-02 04:39:34.315818 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s 2026-02-02 04:39:34.315824 | orchestrator | Aggregate test results step two ----------------------------------------- 0.25s 2026-02-02 04:39:34.315830 | orchestrator | Aggregate test results step two ----------------------------------------- 0.25s 2026-02-02 04:39:34.659119 | orchestrator | + osism validate ceph-osds 2026-02-02 04:39:55.945834 | orchestrator | 2026-02-02 04:39:55.945947 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-02-02 04:39:55.945964 | orchestrator | 2026-02-02 04:39:55.945976 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-02-02 04:39:55.945988 | orchestrator | Monday 02 February 2026 04:39:51 +0000 (0:00:00.448) 0:00:00.448 ******* 2026-02-02 04:39:55.945999 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-02 04:39:55.946011 | orchestrator | 2026-02-02 04:39:55.946138 | orchestrator | TASK [Get extra vars for Ceph configuration] 
*********************************** 2026-02-02 04:39:55.946151 | orchestrator | Monday 02 February 2026 04:39:52 +0000 (0:00:00.843) 0:00:01.292 ******* 2026-02-02 04:39:55.946163 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-02 04:39:55.946174 | orchestrator | 2026-02-02 04:39:55.946185 | orchestrator | TASK [Create report output directory] ****************************************** 2026-02-02 04:39:55.946197 | orchestrator | Monday 02 February 2026 04:39:52 +0000 (0:00:00.531) 0:00:01.823 ******* 2026-02-02 04:39:55.946208 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-02 04:39:55.946219 | orchestrator | 2026-02-02 04:39:55.946231 | orchestrator | TASK [Define report vars] ****************************************************** 2026-02-02 04:39:55.946242 | orchestrator | Monday 02 February 2026 04:39:53 +0000 (0:00:00.730) 0:00:02.553 ******* 2026-02-02 04:39:55.946253 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:39:55.946266 | orchestrator | 2026-02-02 04:39:55.946278 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-02-02 04:39:55.946289 | orchestrator | Monday 02 February 2026 04:39:53 +0000 (0:00:00.121) 0:00:02.675 ******* 2026-02-02 04:39:55.946301 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:39:55.946312 | orchestrator | 2026-02-02 04:39:55.946323 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-02-02 04:39:55.946334 | orchestrator | Monday 02 February 2026 04:39:53 +0000 (0:00:00.112) 0:00:02.788 ******* 2026-02-02 04:39:55.946345 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:39:55.946356 | orchestrator | skipping: [testbed-node-4] 2026-02-02 04:39:55.946367 | orchestrator | skipping: [testbed-node-5] 2026-02-02 04:39:55.946379 | orchestrator | 2026-02-02 04:39:55.946408 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2026-02-02 04:39:55.946422 | orchestrator | Monday 02 February 2026 04:39:54 +0000 (0:00:00.311) 0:00:03.099 ******* 2026-02-02 04:39:55.946435 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:39:55.946448 | orchestrator | 2026-02-02 04:39:55.946460 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-02-02 04:39:55.946495 | orchestrator | Monday 02 February 2026 04:39:54 +0000 (0:00:00.135) 0:00:03.235 ******* 2026-02-02 04:39:55.946507 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:39:55.946518 | orchestrator | ok: [testbed-node-4] 2026-02-02 04:39:55.946529 | orchestrator | ok: [testbed-node-5] 2026-02-02 04:39:55.946540 | orchestrator | 2026-02-02 04:39:55.946551 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-02-02 04:39:55.946563 | orchestrator | Monday 02 February 2026 04:39:54 +0000 (0:00:00.306) 0:00:03.541 ******* 2026-02-02 04:39:55.946573 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:39:55.946584 | orchestrator | 2026-02-02 04:39:55.946596 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-02 04:39:55.946607 | orchestrator | Monday 02 February 2026 04:39:55 +0000 (0:00:00.826) 0:00:04.367 ******* 2026-02-02 04:39:55.946618 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:39:55.946629 | orchestrator | ok: [testbed-node-4] 2026-02-02 04:39:55.946640 | orchestrator | ok: [testbed-node-5] 2026-02-02 04:39:55.946651 | orchestrator | 2026-02-02 04:39:55.946662 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-02-02 04:39:55.946673 | orchestrator | Monday 02 February 2026 04:39:55 +0000 (0:00:00.311) 0:00:04.679 ******* 2026-02-02 04:39:55.946687 | orchestrator | skipping: [testbed-node-3] => (item={'id': '76dfef21fd86afd128810e7d97c3fa9e0a1e2ed8d357f50eac64612eef8d9cf7', 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-02-02 04:39:55.946702 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8565060a074d55be6cc01aa7c950da07488729ded99f05f49ead60c9a0e39e7c', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-02 04:39:55.946715 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2fe8531cfc3bb9c66ef235d8815d54756e5871fd8ae96cc4d869384c04ddc46e', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-02 04:39:55.946727 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e08a0f0050418350cb24257f37420183cc2e6147670f2a6388853513bbecc340', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 19 minutes (unhealthy)'})  2026-02-02 04:39:55.946738 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c3a8cd9081d47f83eb22d2cf0c5aa157f053b5ec4df6655bd6c6dbc7b880cb94', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-02-02 04:39:55.946774 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'da37655fe87e2da87549ed0cf0be288cd88d215587cc3a9db1d59bfd9bf3fc30', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-02-02 04:39:55.946786 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8b1c91601b604b2a2cac29d06b73610d6bbf19b39454806961c3f24f16477910', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 
'name': '/nova_ssh', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-02 04:39:55.946798 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5d8ee6867f67921219fb8221964004f4d610160b2fbe466feae8a3a2a198839f', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 47 minutes (healthy)'})  2026-02-02 04:39:55.946810 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5242d1c6095379fc1050c99608bfd7f5ce9095edf5e466af2df5a33eb3789c32', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-02-02 04:39:55.946831 | orchestrator | skipping: [testbed-node-3] => (item={'id': '13fa5ea8b750cec8b0d1a4c44cd07747829e23f69c8842186cfc068b8ced3a9a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})  2026-02-02 04:39:55.946843 | orchestrator | skipping: [testbed-node-3] => (item={'id': '59fef153eeea6055cf6acd4f9d5d1ab0c4f05ceba746ee72b436a9364cc82d1c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})  2026-02-02 04:39:55.946857 | orchestrator | ok: [testbed-node-3] => (item={'id': 'a933948c7868737a33954c6b9f103a4227487364f83e126ade7ba3befbec657d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-02 04:39:55.946869 | orchestrator | ok: [testbed-node-3] => (item={'id': '305e787f3bf56c0b1e6dce9c0d4bf30c21029b67b9ca510c7170634cb7c25e9f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-02 04:39:55.946880 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'4191aba92ce5c83db46af3e30e414c1838627bade8ab2ce07380ee690c058db9', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-02-02 04:39:55.946892 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e329e020bade91a6e977d0067b13e005e1d84397bf7ca6161bcd920540f13009', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-02 04:39:55.946904 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd9fa0c8291e9f91efefabc9be5582fc16040575055b056e907a3c9cf036c159d', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-02 04:39:55.946915 | orchestrator | skipping: [testbed-node-3] => (item={'id': '806a5e9e00f79e76691c9fcea8c0bcd49dea8191a126ff4be237e7329c53007b', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-02 04:39:55.946927 | orchestrator | skipping: [testbed-node-3] => (item={'id': '242afe10bb1bc54dd03347ed5f6276fe3e089879565c410592ed823285c0709c', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-02 04:39:55.946939 | orchestrator | skipping: [testbed-node-3] => (item={'id': '37075cce3e3f47ae78b7841b707de4d46f0bf18eef3779f1d1f294787ed565e6', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-02 04:39:55.946950 | orchestrator | skipping: [testbed-node-4] => (item={'id': '81cfd460ac8990f5e345739de5de18cf7e83beab14cb743deb30fad0db0797ca', 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-02-02 04:39:55.946969 | orchestrator | skipping: [testbed-node-4] => (item={'id': '79ca2dcfac59a6891bfbcbb5482e5a22c5ee5b2ebe3b80f8ca9162ea3b72920a', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-02 04:39:56.237466 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2f3f1f217f80abb7943b7cf42bebfb1d6aed36fcaf7003c1a292bb5b52b1072c', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-02 04:39:56.237616 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e933100b2b88f9cc4eeda8048d88605b62624715f6ddfbaaafe21810d629a9c3', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 19 minutes (unhealthy)'})  2026-02-02 04:39:56.237657 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f626b48308f787943199080f38f0b6d4eba593c8e3cfc2a28c3a9cd798f984b8', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-02-02 04:39:56.237672 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e05208c8806432043349cd30b3f7bcee582094cbcff5cc74aef2fd8b491725a6', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-02-02 04:39:56.237689 | orchestrator | skipping: [testbed-node-4] => (item={'id': '99350eee88e546e078a620a8d87c228b89183b2d544708d178c682ea66cdf8d1', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 
'name': '/nova_ssh', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-02 04:39:56.237701 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'eeceb7dda3db74250f6659c3b468a1be8041e5e83ea00522f2319fc07053bb3e', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 47 minutes (healthy)'})  2026-02-02 04:39:56.237712 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5bcc440a2a755426f44378b85e3932434d145fddcf2d8e655daaa791251249cf', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-02-02 04:39:56.237725 | orchestrator | skipping: [testbed-node-4] => (item={'id': '065b69d9e58f172a15d2785bdbd7d8bb18122f2da52750db1d29302e361691af', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-02-02 04:39:56.237736 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0c83b9abb99d60807a30af4ef043a0834872a475997f2af3ba6107aa0fb17fca', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-02-02 04:39:56.237750 | orchestrator | ok: [testbed-node-4] => (item={'id': 'b3ee01cc68716642ab9b8ee3473cb33e1824e30deeb8f187d163945cdb14ec45', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-02 04:39:56.237761 | orchestrator | ok: [testbed-node-4] => (item={'id': 'eaba8f65dbc744f95e32e90988a86840c98ea0e15a3f9941b6d19e4874b5dd00', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-02 04:39:56.237773 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'3c9c84e9e10bc6ecf3b32b80def3054f4cc3e2639aebcd26cc7bdb2ee214cc8e', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-02-02 04:39:56.237784 | orchestrator | skipping: [testbed-node-4] => (item={'id': '506057bb61b0f11f2365dcfd10ff03f4fdcfcbbd0b3491eeb6901319a3ca7719', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-02 04:39:56.237795 | orchestrator | skipping: [testbed-node-4] => (item={'id': '030c5f12702bf014fbb4deaf0fd9bc19d40207e1d4c1b862a658da94376a486a', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-02 04:39:56.237824 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c0b24d9376874ae428b841ba7d7778282a367bb247c10db6c4baf466368f2533', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-02 04:39:56.237845 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4993334d2c7a3a71a191cf9c6333da90783582b89001738094072690eaef5e5c', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-02 04:39:56.237857 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0911ba36c78363f4cabc5d6468b27c17baa2cc4d26f49bcfcf18f2b7cb4ba887', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-02 04:39:56.237869 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b515683e7e2d965ebc2edfa07eceaf4fe152553455a8d99333697ede48b36f8f', 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-02-02 04:39:56.237880 | orchestrator | skipping: [testbed-node-5] => (item={'id': '33476a3d8bed7f85a775b6e33bb7639cbf9ee72a0fb1c02bdb5afc7d0f97041d', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-02 04:39:56.237896 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2b52d5c307bf8dd2ab05a8a595f69f63c3c679ac973192b314c0836506cbf612', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-02 04:39:56.237908 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c78ee9020f585bf3030f1eb7ee85ced28cf6e35365327d16ee3ab995953b915a', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 19 minutes (unhealthy)'})  2026-02-02 04:39:56.237919 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c3620dacb09a6f0ab66d2c4ea59466789a35fb36f09cda6d7d2cfe81c6393bf4', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-02-02 04:39:56.237931 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fd4b4db8459af4fe002b1aaf8d4d5f5db3c4806c6032aec9e13d96384b1313a4', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-02-02 04:39:56.237942 | orchestrator | skipping: [testbed-node-5] => (item={'id': '026ed42bcec95b77a2702f9f818eeb11767f976b256c0eb8299ed9dbbe917161', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 
'name': '/nova_ssh', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-02 04:39:56.237953 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c5847fbed40316074efc330e49b284326253a1b90619d4dd7b7dc87a6b69acde', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 47 minutes (healthy)'})  2026-02-02 04:39:56.237965 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7bbff660593d554b0c9e9268106059fcf7f53f08f6b28526ed1a2a038f73cf48', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-02-02 04:39:56.237976 | orchestrator | skipping: [testbed-node-5] => (item={'id': '898b850f4efab8bd68acf05cda320a3e3237dc4e6633f9623e5890f23b14f326', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-02-02 04:39:56.237987 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8c978e7454d56dcc9ccc38a4d9410911ff1538934098bcc9388b76c7bd153796', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-02-02 04:39:56.238006 | orchestrator | ok: [testbed-node-5] => (item={'id': '46b19c9a2621d698ab9588acf0e17a7ce0de7a427f48baed6d62bbee5c936992', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-02 04:39:56.238118 | orchestrator | ok: [testbed-node-5] => (item={'id': 'a43f4864232ffd7e6529d3a8b3c00502abb498569c435472a4eaccc909c1cd40', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-02 04:40:07.255641 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'64d88adaaf37acae4f5db60f63256b495baedba12e0d231906733949f52cf23d', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-02-02 04:40:07.255733 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7f5eb550f6d699c6a758ec59d661a438f711e0e58a8805d8ccbc80be6cf0a3cd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-02 04:40:07.255746 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0c7de4c456da6a86e4a9d7978d3bb27034b09a84814502a61951d9e0a59b7015', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-02 04:40:07.255755 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'dfe288edf547dc1e41bd83703c65a9a39fe2097045c0e913a8d1a03d7ed1ebb3', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-02 04:40:07.255776 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7ecddeba4f5755bda692da5b2228a0104f746a8327b9a04784778e06bdc9cb61', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-02 04:40:07.255784 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'aaae749c49913d8fc9bd74091fe0a066d96b30b556ac7e09859bb97a3880d050', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-02 04:40:07.255792 | orchestrator | 2026-02-02 04:40:07.255800 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2026-02-02 04:40:07.255808 | orchestrator | Monday 02 February 2026 
04:39:56 +0000 (0:00:00.550) 0:00:05.229 ******* 2026-02-02 04:40:07.255815 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:40:07.255823 | orchestrator | ok: [testbed-node-4] 2026-02-02 04:40:07.255829 | orchestrator | ok: [testbed-node-5] 2026-02-02 04:40:07.255836 | orchestrator | 2026-02-02 04:40:07.255843 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-02-02 04:40:07.255850 | orchestrator | Monday 02 February 2026 04:39:56 +0000 (0:00:00.292) 0:00:05.522 ******* 2026-02-02 04:40:07.255857 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:40:07.255865 | orchestrator | skipping: [testbed-node-4] 2026-02-02 04:40:07.255872 | orchestrator | skipping: [testbed-node-5] 2026-02-02 04:40:07.255879 | orchestrator | 2026-02-02 04:40:07.255886 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-02-02 04:40:07.255892 | orchestrator | Monday 02 February 2026 04:39:56 +0000 (0:00:00.483) 0:00:06.005 ******* 2026-02-02 04:40:07.255899 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:40:07.255906 | orchestrator | ok: [testbed-node-4] 2026-02-02 04:40:07.255913 | orchestrator | ok: [testbed-node-5] 2026-02-02 04:40:07.255920 | orchestrator | 2026-02-02 04:40:07.255927 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-02 04:40:07.255933 | orchestrator | Monday 02 February 2026 04:39:57 +0000 (0:00:00.325) 0:00:06.330 ******* 2026-02-02 04:40:07.255940 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:40:07.255947 | orchestrator | ok: [testbed-node-4] 2026-02-02 04:40:07.255954 | orchestrator | ok: [testbed-node-5] 2026-02-02 04:40:07.255977 | orchestrator | 2026-02-02 04:40:07.255984 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-02-02 04:40:07.255991 | orchestrator | Monday 02 February 2026 04:39:57 +0000 (0:00:00.291) 0:00:06.622 ******* 
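The tasks above enumerate the containers on each host, keep only those named `ceph-osd-*`, and compare the count against the expected number of OSDs per host ("Set test result to failed when count of containers is wrong" / "passed if count matches"). A minimal sketch of that filter, assuming container records shaped like the `(item=...)` dicts printed in the log; the helper name is ours:

```python
# Count running ceph-osd containers from a list of container records, as the
# "Get count of ceph-osd containers on host" task does. The record shape
# mirrors the items printed in the log above.
def count_ceph_osd_containers(containers):
    return sum(
        1
        for c in containers
        if c["name"].lstrip("/").startswith("ceph-osd") and c["state"] == "running"
    )

containers = [
    {"name": "/ceph-osd-4", "state": "running"},
    {"name": "/ceph-osd-0", "state": "running"},
    {"name": "/nova_compute", "state": "running"},
    {"name": "/ceph-crash-testbed-node-3", "state": "running"},
]
print(count_ceph_osd_containers(containers))  # prints 2: each testbed node runs two OSDs
```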
2026-02-02 04:40:07.255998 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-02-02 04:40:07.256006 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-02-02 04:40:07.256073 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:40:07.256080 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-02-02 04:40:07.256087 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-02-02 04:40:07.256094 | orchestrator | skipping: [testbed-node-4] 2026-02-02 04:40:07.256100 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-02-02 04:40:07.256107 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-02-02 04:40:07.256114 | orchestrator | skipping: [testbed-node-5] 2026-02-02 04:40:07.256120 | orchestrator | 2026-02-02 04:40:07.256127 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-02-02 04:40:07.256134 | orchestrator | Monday 02 February 2026 04:39:57 +0000 (0:00:00.310) 0:00:06.932 ******* 2026-02-02 04:40:07.256141 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:40:07.256148 | orchestrator | ok: [testbed-node-4] 2026-02-02 04:40:07.256154 | orchestrator | ok: [testbed-node-5] 2026-02-02 04:40:07.256161 | orchestrator | 2026-02-02 04:40:07.256167 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-02-02 04:40:07.256174 | orchestrator | Monday 02 February 2026 04:39:58 +0000 (0:00:00.520) 0:00:07.453 ******* 2026-02-02 04:40:07.256181 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:40:07.256201 | orchestrator | skipping: [testbed-node-4] 2026-02-02 04:40:07.256210 | 
orchestrator | skipping: [testbed-node-5] 2026-02-02 04:40:07.256218 | orchestrator | 2026-02-02 04:40:07.256226 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-02-02 04:40:07.256234 | orchestrator | Monday 02 February 2026 04:39:58 +0000 (0:00:00.277) 0:00:07.731 ******* 2026-02-02 04:40:07.256242 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:40:07.256250 | orchestrator | skipping: [testbed-node-4] 2026-02-02 04:40:07.256258 | orchestrator | skipping: [testbed-node-5] 2026-02-02 04:40:07.256266 | orchestrator | 2026-02-02 04:40:07.256274 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-02-02 04:40:07.256282 | orchestrator | Monday 02 February 2026 04:39:59 +0000 (0:00:00.281) 0:00:08.013 ******* 2026-02-02 04:40:07.256290 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:40:07.256298 | orchestrator | ok: [testbed-node-4] 2026-02-02 04:40:07.256307 | orchestrator | ok: [testbed-node-5] 2026-02-02 04:40:07.256315 | orchestrator | 2026-02-02 04:40:07.256323 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-02 04:40:07.256331 | orchestrator | Monday 02 February 2026 04:39:59 +0000 (0:00:00.496) 0:00:08.510 ******* 2026-02-02 04:40:07.256338 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:40:07.256346 | orchestrator | 2026-02-02 04:40:07.256354 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-02 04:40:07.256362 | orchestrator | Monday 02 February 2026 04:39:59 +0000 (0:00:00.256) 0:00:08.766 ******* 2026-02-02 04:40:07.256370 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:40:07.256378 | orchestrator | 2026-02-02 04:40:07.256386 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-02 04:40:07.256395 | orchestrator | Monday 02 February 2026 04:40:00 +0000 
(0:00:00.249) 0:00:09.015 ******* 2026-02-02 04:40:07.256403 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:40:07.256410 | orchestrator | 2026-02-02 04:40:07.256419 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-02 04:40:07.256434 | orchestrator | Monday 02 February 2026 04:40:00 +0000 (0:00:00.250) 0:00:09.266 ******* 2026-02-02 04:40:07.256442 | orchestrator | 2026-02-02 04:40:07.256450 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-02 04:40:07.256458 | orchestrator | Monday 02 February 2026 04:40:00 +0000 (0:00:00.070) 0:00:09.336 ******* 2026-02-02 04:40:07.256464 | orchestrator | 2026-02-02 04:40:07.256471 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-02 04:40:07.256478 | orchestrator | Monday 02 February 2026 04:40:00 +0000 (0:00:00.072) 0:00:09.408 ******* 2026-02-02 04:40:07.256484 | orchestrator | 2026-02-02 04:40:07.256491 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-02 04:40:07.256498 | orchestrator | Monday 02 February 2026 04:40:00 +0000 (0:00:00.075) 0:00:09.484 ******* 2026-02-02 04:40:07.256504 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:40:07.256511 | orchestrator | 2026-02-02 04:40:07.256517 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-02-02 04:40:07.256524 | orchestrator | Monday 02 February 2026 04:40:00 +0000 (0:00:00.271) 0:00:09.755 ******* 2026-02-02 04:40:07.256531 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:40:07.256537 | orchestrator | 2026-02-02 04:40:07.256544 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-02 04:40:07.256551 | orchestrator | Monday 02 February 2026 04:40:00 +0000 (0:00:00.250) 0:00:10.005 ******* 2026-02-02 04:40:07.256558 | 
orchestrator | ok: [testbed-node-3] 2026-02-02 04:40:07.256564 | orchestrator | ok: [testbed-node-4] 2026-02-02 04:40:07.256571 | orchestrator | ok: [testbed-node-5] 2026-02-02 04:40:07.256578 | orchestrator | 2026-02-02 04:40:07.256584 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-02-02 04:40:07.256591 | orchestrator | Monday 02 February 2026 04:40:01 +0000 (0:00:00.321) 0:00:10.326 ******* 2026-02-02 04:40:07.256598 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:40:07.256604 | orchestrator | 2026-02-02 04:40:07.256611 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-02-02 04:40:07.256618 | orchestrator | Monday 02 February 2026 04:40:02 +0000 (0:00:00.714) 0:00:11.041 ******* 2026-02-02 04:40:07.256625 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-02 04:40:07.256631 | orchestrator | 2026-02-02 04:40:07.256638 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-02-02 04:40:07.256645 | orchestrator | Monday 02 February 2026 04:40:03 +0000 (0:00:01.569) 0:00:12.611 ******* 2026-02-02 04:40:07.256651 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:40:07.256658 | orchestrator | 2026-02-02 04:40:07.256664 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-02-02 04:40:07.256671 | orchestrator | Monday 02 February 2026 04:40:03 +0000 (0:00:00.181) 0:00:12.792 ******* 2026-02-02 04:40:07.256678 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:40:07.256684 | orchestrator | 2026-02-02 04:40:07.256691 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-02-02 04:40:07.256698 | orchestrator | Monday 02 February 2026 04:40:04 +0000 (0:00:00.331) 0:00:13.124 ******* 2026-02-02 04:40:07.256704 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:40:07.256711 | 
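The "Get ceph osd tree" / "Get OSDs that are not up or in" tasks above run `ceph osd tree` on a mon host, parse the JSON, and flag any OSD that is down or out. A minimal sketch of that check, assuming the usual node layout (`type`, `status`, `reweight`) of `ceph osd tree --format json` output; the sample data is ours:

```python
import json

# Flag OSDs that are not up ("status" != "up") or not in (reweight == 0),
# mirroring the "Get OSDs that are not up or in" task. Sample data stands in
# for real `ceph osd tree --format json` output.
osd_tree_json = """
{"nodes": [
  {"id": -1, "name": "default", "type": "root", "children": [0, 1]},
  {"id": 0, "name": "osd.0", "type": "osd", "status": "up", "reweight": 1.0},
  {"id": 1, "name": "osd.1", "type": "osd", "status": "down", "reweight": 0.0}
]}
"""

def bad_osds(tree):
    return [
        n["name"]
        for n in tree["nodes"]
        if n["type"] == "osd" and (n["status"] != "up" or n["reweight"] == 0)
    ]

print(bad_osds(json.loads(osd_tree_json)))  # prints ['osd.1']
```

In the run logged here the list comes back empty, so "Fail test if OSDs are not up or in" is skipped and the pass branch fires.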
orchestrator | 2026-02-02 04:40:07.256718 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-02-02 04:40:07.256724 | orchestrator | Monday 02 February 2026 04:40:04 +0000 (0:00:00.126) 0:00:13.250 ******* 2026-02-02 04:40:07.256731 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:40:07.256738 | orchestrator | 2026-02-02 04:40:07.256744 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-02 04:40:07.256751 | orchestrator | Monday 02 February 2026 04:40:04 +0000 (0:00:00.155) 0:00:13.406 ******* 2026-02-02 04:40:07.256758 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:40:07.256764 | orchestrator | ok: [testbed-node-4] 2026-02-02 04:40:07.256771 | orchestrator | ok: [testbed-node-5] 2026-02-02 04:40:07.256782 | orchestrator | 2026-02-02 04:40:07.256789 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-02-02 04:40:07.256796 | orchestrator | Monday 02 February 2026 04:40:04 +0000 (0:00:00.299) 0:00:13.705 ******* 2026-02-02 04:40:07.256803 | orchestrator | changed: [testbed-node-3] 2026-02-02 04:40:07.256809 | orchestrator | changed: [testbed-node-4] 2026-02-02 04:40:07.256816 | orchestrator | changed: [testbed-node-5] 2026-02-02 04:40:17.561353 | orchestrator | 2026-02-02 04:40:17.561464 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-02-02 04:40:17.561485 | orchestrator | Monday 02 February 2026 04:40:07 +0000 (0:00:02.544) 0:00:16.250 ******* 2026-02-02 04:40:17.561500 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:40:17.561515 | orchestrator | ok: [testbed-node-4] 2026-02-02 04:40:17.561529 | orchestrator | ok: [testbed-node-5] 2026-02-02 04:40:17.561544 | orchestrator | 2026-02-02 04:40:17.561554 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-02-02 04:40:17.561562 | orchestrator | Monday 
02 February 2026 04:40:07 +0000 (0:00:00.318) 0:00:16.568 ******* 2026-02-02 04:40:17.561570 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:40:17.561579 | orchestrator | ok: [testbed-node-4] 2026-02-02 04:40:17.561587 | orchestrator | ok: [testbed-node-5] 2026-02-02 04:40:17.561595 | orchestrator | 2026-02-02 04:40:17.561603 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-02-02 04:40:17.561611 | orchestrator | Monday 02 February 2026 04:40:08 +0000 (0:00:00.483) 0:00:17.051 ******* 2026-02-02 04:40:17.561622 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:40:17.561637 | orchestrator | skipping: [testbed-node-4] 2026-02-02 04:40:17.561648 | orchestrator | skipping: [testbed-node-5] 2026-02-02 04:40:17.561660 | orchestrator | 2026-02-02 04:40:17.561671 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-02-02 04:40:17.561683 | orchestrator | Monday 02 February 2026 04:40:08 +0000 (0:00:00.283) 0:00:17.335 ******* 2026-02-02 04:40:17.561696 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:40:17.561708 | orchestrator | ok: [testbed-node-4] 2026-02-02 04:40:17.561720 | orchestrator | ok: [testbed-node-5] 2026-02-02 04:40:17.561733 | orchestrator | 2026-02-02 04:40:17.561746 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-02-02 04:40:17.561764 | orchestrator | Monday 02 February 2026 04:40:08 +0000 (0:00:00.548) 0:00:17.883 ******* 2026-02-02 04:40:17.561777 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:40:17.561789 | orchestrator | skipping: [testbed-node-4] 2026-02-02 04:40:17.561802 | orchestrator | skipping: [testbed-node-5] 2026-02-02 04:40:17.561814 | orchestrator | 2026-02-02 04:40:17.561828 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-02-02 04:40:17.561842 | orchestrator | Monday 02 February 2026 04:40:09 +0000 
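The encrypted-OSD tasks above parse `ceph-volume lvm list` output and compare the number of OSDs whose LVM tags mark them encrypted against the total OSD count (here all OSDs are encrypted, so only the "encrypted equals count" pass branch runs). A minimal sketch of that split, assuming the `ceph.encrypted` tag convention in the JSON output; the sample data and helper name are ours:

```python
import json

# Split OSDs into encrypted/unencrypted from `ceph-volume lvm list --format json`
# style data, as the "Get unencrypted and encrypted OSDs" task does. The
# "ceph.encrypted" tag is the usual ceph-volume convention; sample data is ours.
lvm_json = """
{"0": [{"tags": {"ceph.osd_id": "0", "ceph.encrypted": "1"}}],
 "4": [{"tags": {"ceph.osd_id": "4", "ceph.encrypted": "1"}}]}
"""

def split_by_encryption(lvm):
    encrypted, unencrypted = [], []
    for osd_id, lvs in lvm.items():
        tags = lvs[0].get("tags", {})
        (encrypted if tags.get("ceph.encrypted") == "1" else unencrypted).append(osd_id)
    return encrypted, unencrypted

enc, unenc = split_by_encryption(json.loads(lvm_json))
print(len(enc), len(unenc))  # prints: 2 0
```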
(0:00:00.290) 0:00:18.174 ******* 2026-02-02 04:40:17.561855 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:40:17.561869 | orchestrator | skipping: [testbed-node-4] 2026-02-02 04:40:17.561882 | orchestrator | skipping: [testbed-node-5] 2026-02-02 04:40:17.561896 | orchestrator | 2026-02-02 04:40:17.561906 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-02 04:40:17.561915 | orchestrator | Monday 02 February 2026 04:40:09 +0000 (0:00:00.282) 0:00:18.456 ******* 2026-02-02 04:40:17.561928 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:40:17.561943 | orchestrator | ok: [testbed-node-4] 2026-02-02 04:40:17.561957 | orchestrator | ok: [testbed-node-5] 2026-02-02 04:40:17.561995 | orchestrator | 2026-02-02 04:40:17.562011 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-02-02 04:40:17.562081 | orchestrator | Monday 02 February 2026 04:40:09 +0000 (0:00:00.529) 0:00:18.986 ******* 2026-02-02 04:40:17.562095 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:40:17.562111 | orchestrator | ok: [testbed-node-4] 2026-02-02 04:40:17.562125 | orchestrator | ok: [testbed-node-5] 2026-02-02 04:40:17.562140 | orchestrator | 2026-02-02 04:40:17.562155 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-02-02 04:40:17.562193 | orchestrator | Monday 02 February 2026 04:40:10 +0000 (0:00:00.732) 0:00:19.719 ******* 2026-02-02 04:40:17.562204 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:40:17.562213 | orchestrator | ok: [testbed-node-4] 2026-02-02 04:40:17.562222 | orchestrator | ok: [testbed-node-5] 2026-02-02 04:40:17.562231 | orchestrator | 2026-02-02 04:40:17.562241 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-02-02 04:40:17.562251 | orchestrator | Monday 02 February 2026 04:40:11 +0000 (0:00:00.295) 0:00:20.014 ******* 2026-02-02 
04:40:17.563086 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:40:17.563114 | orchestrator | skipping: [testbed-node-4] 2026-02-02 04:40:17.563122 | orchestrator | skipping: [testbed-node-5] 2026-02-02 04:40:17.563130 | orchestrator | 2026-02-02 04:40:17.563139 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-02-02 04:40:17.563148 | orchestrator | Monday 02 February 2026 04:40:11 +0000 (0:00:00.292) 0:00:20.306 ******* 2026-02-02 04:40:17.563155 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:40:17.563163 | orchestrator | ok: [testbed-node-4] 2026-02-02 04:40:17.563171 | orchestrator | ok: [testbed-node-5] 2026-02-02 04:40:17.563180 | orchestrator | 2026-02-02 04:40:17.563188 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-02-02 04:40:17.563196 | orchestrator | Monday 02 February 2026 04:40:11 +0000 (0:00:00.515) 0:00:20.822 ******* 2026-02-02 04:40:17.563204 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-02 04:40:17.563212 | orchestrator | 2026-02-02 04:40:17.563220 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-02-02 04:40:17.563227 | orchestrator | Monday 02 February 2026 04:40:12 +0000 (0:00:00.273) 0:00:21.095 ******* 2026-02-02 04:40:17.563235 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:40:17.563243 | orchestrator | 2026-02-02 04:40:17.563251 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-02 04:40:17.563259 | orchestrator | Monday 02 February 2026 04:40:12 +0000 (0:00:00.251) 0:00:21.347 ******* 2026-02-02 04:40:17.563267 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-02 04:40:17.563275 | orchestrator | 2026-02-02 04:40:17.563283 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-02 
04:40:17.563291 | orchestrator | Monday 02 February 2026 04:40:14 +0000 (0:00:01.685) 0:00:23.032 ******* 2026-02-02 04:40:17.563299 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-02 04:40:17.563307 | orchestrator | 2026-02-02 04:40:17.563315 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-02 04:40:17.563323 | orchestrator | Monday 02 February 2026 04:40:14 +0000 (0:00:00.257) 0:00:23.289 ******* 2026-02-02 04:40:17.563331 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-02 04:40:17.563339 | orchestrator | 2026-02-02 04:40:17.563368 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-02 04:40:17.563377 | orchestrator | Monday 02 February 2026 04:40:14 +0000 (0:00:00.264) 0:00:23.553 ******* 2026-02-02 04:40:17.563384 | orchestrator | 2026-02-02 04:40:17.563393 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-02 04:40:17.563400 | orchestrator | Monday 02 February 2026 04:40:14 +0000 (0:00:00.085) 0:00:23.639 ******* 2026-02-02 04:40:17.563408 | orchestrator | 2026-02-02 04:40:17.563416 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-02 04:40:17.563424 | orchestrator | Monday 02 February 2026 04:40:14 +0000 (0:00:00.070) 0:00:23.710 ******* 2026-02-02 04:40:17.563432 | orchestrator | 2026-02-02 04:40:17.563440 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-02-02 04:40:17.563448 | orchestrator | Monday 02 February 2026 04:40:14 +0000 (0:00:00.073) 0:00:23.783 ******* 2026-02-02 04:40:17.563455 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-02 04:40:17.563463 | orchestrator | 2026-02-02 04:40:17.563471 | orchestrator | TASK [Print report file information] 
******************************************* 2026-02-02 04:40:17.563492 | orchestrator | Monday 02 February 2026 04:40:16 +0000 (0:00:01.551) 0:00:25.334 ******* 2026-02-02 04:40:17.563500 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-02-02 04:40:17.563508 | orchestrator |  "msg": [ 2026-02-02 04:40:17.563516 | orchestrator |  "Validator run completed.", 2026-02-02 04:40:17.563524 | orchestrator |  "You can find the report file here:", 2026-02-02 04:40:17.563532 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-02-02T04:39:52+00:00-report.json", 2026-02-02 04:40:17.563549 | orchestrator |  "on the following host:", 2026-02-02 04:40:17.563557 | orchestrator |  "testbed-manager" 2026-02-02 04:40:17.563565 | orchestrator |  ] 2026-02-02 04:40:17.563573 | orchestrator | } 2026-02-02 04:40:17.563582 | orchestrator | 2026-02-02 04:40:17.563589 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 04:40:17.563599 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-02 04:40:17.563609 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-02 04:40:17.563617 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-02 04:40:17.563625 | orchestrator | 2026-02-02 04:40:17.563633 | orchestrator | 2026-02-02 04:40:17.563641 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 04:40:17.563649 | orchestrator | Monday 02 February 2026 04:40:17 +0000 (0:00:00.876) 0:00:26.211 ******* 2026-02-02 04:40:17.563657 | orchestrator | =============================================================================== 2026-02-02 04:40:17.563665 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.55s 2026-02-02 
04:40:17.563672 | orchestrator | Aggregate test results step one ----------------------------------------- 1.69s 2026-02-02 04:40:17.563680 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.57s 2026-02-02 04:40:17.563688 | orchestrator | Write report file ------------------------------------------------------- 1.55s 2026-02-02 04:40:17.563696 | orchestrator | Print report file information ------------------------------------------- 0.88s 2026-02-02 04:40:17.563704 | orchestrator | Get timestamp for report file ------------------------------------------- 0.84s 2026-02-02 04:40:17.563712 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.83s 2026-02-02 04:40:17.563720 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.73s 2026-02-02 04:40:17.563728 | orchestrator | Create report output directory ------------------------------------------ 0.73s 2026-02-02 04:40:17.563735 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.71s 2026-02-02 04:40:17.563743 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.55s 2026-02-02 04:40:17.563751 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.55s 2026-02-02 04:40:17.563759 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.53s 2026-02-02 04:40:17.563767 | orchestrator | Prepare test data ------------------------------------------------------- 0.53s 2026-02-02 04:40:17.563775 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.52s 2026-02-02 04:40:17.563783 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.52s 2026-02-02 04:40:17.563790 | orchestrator | Set test result to passed if all containers are running ----------------- 0.50s 2026-02-02 04:40:17.563798 
| orchestrator | Set test result to failed when count of containers is wrong ------------- 0.48s 2026-02-02 04:40:17.563806 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.48s 2026-02-02 04:40:17.563814 | orchestrator | Get OSDs that are not up or in ------------------------------------------ 0.33s 2026-02-02 04:40:17.904596 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-02-02 04:40:17.909927 | orchestrator | + set -e 2026-02-02 04:40:17.910086 | orchestrator | + source /opt/manager-vars.sh 2026-02-02 04:40:17.910102 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-02 04:40:17.910114 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-02 04:40:17.910124 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-02 04:40:17.910135 | orchestrator | ++ CEPH_VERSION=reef 2026-02-02 04:40:17.910146 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-02 04:40:17.910158 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-02 04:40:17.910169 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-02 04:40:17.910180 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-02 04:40:17.910191 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-02 04:40:17.910202 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-02 04:40:17.910213 | orchestrator | ++ export ARA=false 2026-02-02 04:40:17.910225 | orchestrator | ++ ARA=false 2026-02-02 04:40:17.910243 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-02 04:40:17.910261 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-02 04:40:17.910279 | orchestrator | ++ export TEMPEST=false 2026-02-02 04:40:17.910298 | orchestrator | ++ TEMPEST=false 2026-02-02 04:40:17.910316 | orchestrator | ++ export IS_ZUUL=true 2026-02-02 04:40:17.910334 | orchestrator | ++ IS_ZUUL=true 2026-02-02 04:40:17.910352 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.102 2026-02-02 04:40:17.910367 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.102 2026-02-02 04:40:17.910378 | orchestrator | ++ export EXTERNAL_API=false 2026-02-02 04:40:17.910389 | orchestrator | ++ EXTERNAL_API=false 2026-02-02 04:40:17.910399 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-02 04:40:17.910410 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-02 04:40:17.910421 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-02 04:40:17.910432 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-02 04:40:17.910443 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-02 04:40:17.910454 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-02 04:40:17.910465 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-02-02 04:40:17.910475 | orchestrator | + source /etc/os-release 2026-02-02 04:40:17.910486 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS' 2026-02-02 04:40:17.910497 | orchestrator | ++ NAME=Ubuntu 2026-02-02 04:40:17.910510 | orchestrator | ++ VERSION_ID=24.04 2026-02-02 04:40:17.910523 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)' 2026-02-02 04:40:17.910535 | orchestrator | ++ VERSION_CODENAME=noble 2026-02-02 04:40:17.910547 | orchestrator | ++ ID=ubuntu 2026-02-02 04:40:17.910560 | orchestrator | ++ ID_LIKE=debian 2026-02-02 04:40:17.910572 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-02-02 04:40:17.910584 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-02-02 04:40:17.910597 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-02-02 04:40:17.910610 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-02-02 04:40:17.910623 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-02-02 04:40:17.910636 | orchestrator | ++ LOGO=ubuntu-logo 2026-02-02 04:40:17.910648 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-02-02 04:40:17.910662 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-02-02 
04:40:17.910676 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-02-02 04:40:17.936046 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-02-02 04:40:40.806372 | orchestrator | 2026-02-02 04:40:40.806514 | orchestrator | # Status of Elasticsearch 2026-02-02 04:40:40.806534 | orchestrator | 2026-02-02 04:40:40.806547 | orchestrator | + pushd /opt/configuration/contrib 2026-02-02 04:40:40.806560 | orchestrator | + echo 2026-02-02 04:40:40.806572 | orchestrator | + echo '# Status of Elasticsearch' 2026-02-02 04:40:40.806583 | orchestrator | + echo 2026-02-02 04:40:40.806595 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-02-02 04:40:41.008817 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-02-02 04:40:41.009270 | orchestrator | 2026-02-02 04:40:41.009302 | orchestrator | # Status of MariaDB 2026-02-02 04:40:41.009314 | orchestrator | 2026-02-02 04:40:41.009325 | orchestrator | + echo 2026-02-02 04:40:41.009365 | orchestrator | + echo '# Status of MariaDB' 2026-02-02 04:40:41.009376 | orchestrator | + echo 2026-02-02 04:40:41.011966 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-02 04:40:41.069500 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-02 04:40:41.069595 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-02 04:40:41.069611 | orchestrator | + MARIADB_USER=root_shard_0 2026-02-02 04:40:41.069623 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-02-02 04:40:41.124366 
| orchestrator | Reading package lists... 2026-02-02 04:40:41.492012 | orchestrator | Building dependency tree... 2026-02-02 04:40:41.492744 | orchestrator | Reading state information... 2026-02-02 04:40:41.929768 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2026-02-02 04:40:41.929869 | orchestrator | bc set to manually installed. 2026-02-02 04:40:41.929966 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2026-02-02 04:40:42.605684 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-02-02 04:40:42.606789 | orchestrator | 2026-02-02 04:40:42.606814 | orchestrator | # Status of Prometheus 2026-02-02 04:40:42.606821 | orchestrator | 2026-02-02 04:40:42.606827 | orchestrator | + echo 2026-02-02 04:40:42.606833 | orchestrator | + echo '# Status of Prometheus' 2026-02-02 04:40:42.606839 | orchestrator | + echo 2026-02-02 04:40:42.606844 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-02-02 04:40:42.674812 | orchestrator | Unauthorized 2026-02-02 04:40:42.678530 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-02-02 04:40:42.736301 | orchestrator | Unauthorized 2026-02-02 04:40:42.742863 | orchestrator | 2026-02-02 04:40:42.742982 | orchestrator | # Status of RabbitMQ 2026-02-02 04:40:42.742998 | orchestrator | 2026-02-02 04:40:42.743009 | orchestrator | + echo 2026-02-02 04:40:42.743019 | orchestrator | + echo '# Status of RabbitMQ' 2026-02-02 04:40:42.743030 | orchestrator | + echo 2026-02-02 04:40:42.744167 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-02 04:40:42.807062 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-02 04:40:42.807151 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-02 04:40:42.807166 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-02-02 04:40:43.254298 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node 
OK (3) nb_running_ram_node OK (0) 2026-02-02 04:40:43.267589 | orchestrator | 2026-02-02 04:40:43.267651 | orchestrator | # Status of Redis 2026-02-02 04:40:43.267671 | orchestrator | 2026-02-02 04:40:43.267689 | orchestrator | + echo 2026-02-02 04:40:43.267700 | orchestrator | + echo '# Status of Redis' 2026-02-02 04:40:43.267710 | orchestrator | + echo 2026-02-02 04:40:43.267721 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-02-02 04:40:43.276030 | orchestrator | TCP OK - 0.003 second response time on 192.168.16.10 port 6379|time=0.002504s;;;0.000000;10.000000 2026-02-02 04:40:43.276747 | orchestrator | + popd 2026-02-02 04:40:43.277416 | orchestrator | 2026-02-02 04:40:43.277452 | orchestrator | # Create backup of MariaDB database 2026-02-02 04:40:43.277465 | orchestrator | 2026-02-02 04:40:43.277476 | orchestrator | + echo 2026-02-02 04:40:43.277488 | orchestrator | + echo '# Create backup of MariaDB database' 2026-02-02 04:40:43.277499 | orchestrator | + echo 2026-02-02 04:40:43.277512 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-02-02 04:40:45.391789 | orchestrator | 2026-02-02 04:40:45 | INFO  | Task 126e2902-2aec-40e7-b854-38675a10bc42 (mariadb_backup) was prepared for execution. 2026-02-02 04:40:45.391978 | orchestrator | 2026-02-02 04:40:45 | INFO  | It takes a moment until task 126e2902-2aec-40e7-b854-38675a10bc42 (mariadb_backup) has been started and output is visible here. 
2026-02-02 04:43:52.706997 | orchestrator | 2026-02-02 04:43:52.707148 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 04:43:52.707167 | orchestrator | 2026-02-02 04:43:52.707180 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 04:43:52.707193 | orchestrator | Monday 02 February 2026 04:40:49 +0000 (0:00:00.174) 0:00:00.174 ******* 2026-02-02 04:43:52.707205 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:43:52.707218 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:43:52.707229 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:43:52.707240 | orchestrator | 2026-02-02 04:43:52.707251 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 04:43:52.707292 | orchestrator | Monday 02 February 2026 04:40:49 +0000 (0:00:00.343) 0:00:00.518 ******* 2026-02-02 04:43:52.707304 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-02 04:43:52.707316 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-02-02 04:43:52.707326 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-02 04:43:52.707370 | orchestrator | 2026-02-02 04:43:52.707383 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-02 04:43:52.707394 | orchestrator | 2026-02-02 04:43:52.707405 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-02 04:43:52.707416 | orchestrator | Monday 02 February 2026 04:40:50 +0000 (0:00:00.599) 0:00:01.118 ******* 2026-02-02 04:43:52.707427 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-02 04:43:52.707438 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-02 04:43:52.707449 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-02 04:43:52.707460 | orchestrator | 
2026-02-02 04:43:52.707471 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-02 04:43:52.707482 | orchestrator | Monday 02 February 2026 04:40:50 +0000 (0:00:00.399) 0:00:01.518 ******* 2026-02-02 04:43:52.707493 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 04:43:52.707509 | orchestrator | 2026-02-02 04:43:52.707523 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-02-02 04:43:52.707555 | orchestrator | Monday 02 February 2026 04:40:51 +0000 (0:00:00.563) 0:00:02.082 ******* 2026-02-02 04:43:52.707568 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:43:52.707581 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:43:52.707593 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:43:52.707606 | orchestrator | 2026-02-02 04:43:52.707619 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-02-02 04:43:52.707632 | orchestrator | Monday 02 February 2026 04:40:54 +0000 (0:00:03.172) 0:00:05.254 ******* 2026-02-02 04:43:52.707645 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:43:52.707658 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:43:52.707671 | orchestrator | 2026-02-02 04:43:52.707683 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] *** 2026-02-02 04:43:52.707696 | orchestrator | 2026-02-02 04:43:52.707710 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] *** 2026-02-02 04:43:52.707722 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-02-02 04:43:52.707735 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-02-02 04:43:52.707747 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 
2026-02-02 04:43:52.707759 | orchestrator | mariadb_bootstrap_restart 2026-02-02 04:43:52.707772 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:43:52.707784 | orchestrator | 2026-02-02 04:43:52.707798 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-02 04:43:52.707810 | orchestrator | skipping: no hosts matched 2026-02-02 04:43:52.707823 | orchestrator | 2026-02-02 04:43:52.707836 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-02 04:43:52.707849 | orchestrator | skipping: no hosts matched 2026-02-02 04:43:52.707861 | orchestrator | 2026-02-02 04:43:52.707874 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-02 04:43:52.707885 | orchestrator | skipping: no hosts matched 2026-02-02 04:43:52.707896 | orchestrator | 2026-02-02 04:43:52.707907 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-02 04:43:52.707918 | orchestrator | 2026-02-02 04:43:52.707928 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-02 04:43:52.707939 | orchestrator | Monday 02 February 2026 04:43:51 +0000 (0:02:56.948) 0:03:02.202 ******* 2026-02-02 04:43:52.707961 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:43:52.707972 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:43:52.707982 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:43:52.707993 | orchestrator | 2026-02-02 04:43:52.708004 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-02 04:43:52.708015 | orchestrator | Monday 02 February 2026 04:43:51 +0000 (0:00:00.317) 0:03:02.520 ******* 2026-02-02 04:43:52.708025 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:43:52.708036 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:43:52.708047 | orchestrator | 
skipping: [testbed-node-2] 2026-02-02 04:43:52.708058 | orchestrator | 2026-02-02 04:43:52.708068 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 04:43:52.708081 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 04:43:52.708092 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-02 04:43:52.708104 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-02 04:43:52.708114 | orchestrator | 2026-02-02 04:43:52.708125 | orchestrator | 2026-02-02 04:43:52.708136 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 04:43:52.708146 | orchestrator | Monday 02 February 2026 04:43:52 +0000 (0:00:00.413) 0:03:02.933 ******* 2026-02-02 04:43:52.708176 | orchestrator | =============================================================================== 2026-02-02 04:43:52.708188 | orchestrator | mariadb : Taking full database backup via Mariabackup ----------------- 176.95s 2026-02-02 04:43:52.708199 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.17s 2026-02-02 04:43:52.708210 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s 2026-02-02 04:43:52.708221 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.56s 2026-02-02 04:43:52.708231 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.41s 2026-02-02 04:43:52.708242 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.40s 2026-02-02 04:43:52.708253 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2026-02-02 04:43:52.708264 | orchestrator | Include mariadb post-deploy.yml 
----------------------------------------- 0.32s 2026-02-02 04:43:53.035487 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-02-02 04:43:53.042911 | orchestrator | + set -e 2026-02-02 04:43:53.042933 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-02 04:43:53.043785 | orchestrator | ++ export INTERACTIVE=false 2026-02-02 04:43:53.043794 | orchestrator | ++ INTERACTIVE=false 2026-02-02 04:43:53.043799 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-02 04:43:53.043838 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-02 04:43:53.043845 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-02 04:43:53.045724 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-02-02 04:43:53.053742 | orchestrator | 2026-02-02 04:43:53.053786 | orchestrator | # OpenStack endpoints 2026-02-02 04:43:53.053792 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-02 04:43:53.053797 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-02 04:43:53.053801 | orchestrator | + export OS_CLOUD=admin 2026-02-02 04:43:53.053805 | orchestrator | + OS_CLOUD=admin 2026-02-02 04:43:53.053809 | orchestrator | + echo 2026-02-02 04:43:53.053814 | orchestrator | + echo '# OpenStack endpoints' 2026-02-02 04:43:53.053818 | orchestrator | 2026-02-02 04:43:53.053822 | orchestrator | + echo 2026-02-02 04:43:53.053826 | orchestrator | + openstack endpoint list 2026-02-02 04:43:56.209715 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-02-02 04:43:56.209825 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-02-02 04:43:56.209864 | orchestrator | 
+----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-02-02 04:43:56.209895 | orchestrator | | 093dabdbaaa64386be7ab66ff383b147 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-02-02 04:43:56.209907 | orchestrator | | 2b7ffbb9e84b4b89a85f2b9e5124acf0 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-02-02 04:43:56.209918 | orchestrator | | 392e2f2bc962447398b3e8be861c2306 | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 | 2026-02-02 04:43:56.209929 | orchestrator | | 3b5f5facf50641839eabcdb5f70f29d1 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-02-02 04:43:56.209940 | orchestrator | | 3b74f02e251e4cba8259d378490229e4 | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 | 2026-02-02 04:43:56.209951 | orchestrator | | 3d33d5d073d4491cb91b5a7094c272bd | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-02-02 04:43:56.209961 | orchestrator | | 4402da2289c441cdaa149219c67bdd88 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-02-02 04:43:56.209972 | orchestrator | | 5255f3bc71534dfe9ca363fa18d3d9eb | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-02-02 04:43:56.209983 | orchestrator | | 6042fa3cc2c94899aacc79191a40d6a4 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-02-02 04:43:56.209994 | orchestrator | | 6612088b0eef4567a2ce79da6bfd66ab | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 | 2026-02-02 04:43:56.210005 | orchestrator | | 684d00877a31417da17a995acd86cc9d | 
RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-02-02 04:43:56.210061 | orchestrator | | 85176da7622b4b5b89831513a7f687bf | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-02-02 04:43:56.210074 | orchestrator | | 8869bf7f5eb2467e9a99e5e9a220624d | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 | 2026-02-02 04:43:56.210085 | orchestrator | | 8a4582f8f6b642e2b1e353aa641993bc | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-02-02 04:43:56.210096 | orchestrator | | 9ed23f79149b42a6ac44907ef6952ae5 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-02-02 04:43:56.210106 | orchestrator | | a201d4203db241a9b775cad29adb1435 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-02-02 04:43:56.210117 | orchestrator | | a6f9336e7fa34770960b68aaacd07ae5 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-02-02 04:43:56.210128 | orchestrator | | a8998ad244174d69867b62920109a2c8 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-02-02 04:43:56.210139 | orchestrator | | b842e03b997f4c93bc415baccce231df | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-02-02 04:43:56.210160 | orchestrator | | c30d7cad769241daa64a33414e82f86c | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-02-02 04:43:56.210188 | orchestrator | | ccbf3edd237f47e0ac6f37f514a136b7 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-02-02 04:43:56.210205 | orchestrator | | cf60e94a03c4475f9bb4e7a8c0509dfc | RegionOne | manila | share | True | internal | 
https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-02-02 04:43:56.210216 | orchestrator | | d109555ed5d64070a61c09dce4d88f95 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-02-02 04:43:56.210227 | orchestrator | | dda7f8bb637c400a8a4a22d8a1af71b1 | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 | 2026-02-02 04:43:56.210239 | orchestrator | | dffe1aa4a7a54875b71e54497c1c09d7 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-02-02 04:43:56.210253 | orchestrator | | e672cc9cc2d84d1c9b748f9722dac197 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-02-02 04:43:56.210266 | orchestrator | | e83f44088d4542c3869c036a00bd69f0 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-02-02 04:43:56.210279 | orchestrator | | ee6ac7a8086b49ce83a564bf358d99ce | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-02-02 04:43:56.210291 | orchestrator | | fa8ad9667d7243b0ba1c0569f957baf3 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-02-02 04:43:56.210304 | orchestrator | | fc0d4071342b40b981176c89c5c98416 | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 | 2026-02-02 04:43:56.210317 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-02-02 04:43:56.490851 | orchestrator | 2026-02-02 04:43:56.490952 | orchestrator | # Cinder 2026-02-02 04:43:56.490968 | orchestrator | 2026-02-02 04:43:56.490980 | orchestrator | + echo 2026-02-02 04:43:56.490991 | orchestrator | + echo '# Cinder' 2026-02-02 04:43:56.491005 | orchestrator | + echo 2026-02-02 04:43:56.491025 | 
orchestrator | + openstack volume service list
2026-02-02 04:43:59.192628 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-02-02 04:43:59.192738 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-02-02 04:43:59.192754 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-02-02 04:43:59.192766 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-02-02T04:43:49.000000 |
2026-02-02 04:43:59.192777 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-02-02T04:43:49.000000 |
2026-02-02 04:43:59.192789 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-02-02T04:43:49.000000 |
2026-02-02 04:43:59.192800 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-02-02T04:43:49.000000 |
2026-02-02 04:43:59.192811 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-02-02T04:43:53.000000 |
2026-02-02 04:43:59.192821 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-02-02T04:43:56.000000 |
2026-02-02 04:43:59.192832 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-02-02T04:43:57.000000 |
2026-02-02 04:43:59.192882 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-02-02T04:43:49.000000 |
2026-02-02 04:43:59.192894 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-02-02T04:43:50.000000 |
2026-02-02 04:43:59.192905 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-02-02 04:43:59.485131 | orchestrator |
2026-02-02 04:43:59.485227 | orchestrator | # Neutron
2026-02-02 04:43:59.485242 | orchestrator |
2026-02-02 04:43:59.485254 |
orchestrator | + echo
2026-02-02 04:43:59.485266 | orchestrator | + echo '# Neutron'
2026-02-02 04:43:59.485277 | orchestrator | + echo
2026-02-02 04:43:59.485289 | orchestrator | + openstack network agent list
2026-02-02 04:44:02.184000 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-02-02 04:44:02.184147 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-02-02 04:44:02.184173 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-02-02 04:44:02.184184 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-02-02 04:44:02.184195 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-02-02 04:44:02.184204 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-02-02 04:44:02.184226 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-02-02 04:44:02.184237 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-02-02 04:44:02.184243 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-02-02 04:44:02.184249 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-02-02 04:44:02.184254 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-02-02 04:44:02.184260 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-02-02 04:44:02.184266 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-02-02 04:44:02.481914 | orchestrator | + openstack network service provider list
2026-02-02 04:44:04.945459 | orchestrator | +---------------+------+---------+
2026-02-02 04:44:04.945570 | orchestrator | | Service Type | Name | Default |
2026-02-02 04:44:04.945658 | orchestrator | +---------------+------+---------+
2026-02-02 04:44:04.945671 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-02-02 04:44:04.945682 | orchestrator | +---------------+------+---------+
2026-02-02 04:44:05.250283 | orchestrator |
2026-02-02 04:44:05.250456 | orchestrator | # Nova
2026-02-02 04:44:05.250475 | orchestrator |
2026-02-02 04:44:05.250488 | orchestrator | + echo
2026-02-02 04:44:05.250499 | orchestrator | + echo '# Nova'
2026-02-02 04:44:05.250511 | orchestrator | + echo
2026-02-02 04:44:05.250524 | orchestrator | + openstack compute service list
2026-02-02 04:44:07.916405 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-02-02 04:44:07.916509 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-02-02 04:44:07.916550 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-02-02 04:44:07.916564 | orchestrator | | e417ce23-d7a6-474d-a297-6c94a96a619d | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-02-02T04:43:58.000000 |
2026-02-02 04:44:07.916575 | orchestrator | | a1f82b1e-f3c1-4c5e-a05a-b9ae78b78c49 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-02-02T04:44:02.000000 |
2026-02-02 04:44:07.916586 | orchestrator | | f9ea4457-70cd-4da3-b60f-1858af216069
| nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-02-02T04:44:04.000000 |
2026-02-02 04:44:07.916597 | orchestrator | | c44eb672-5853-4fed-8f00-a1a6a4c98685 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-02-02T04:44:07.000000 |
2026-02-02 04:44:07.916608 | orchestrator | | 3604806e-f4e9-4093-9b16-082ca9632b91 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-02-02T04:44:00.000000 |
2026-02-02 04:44:07.916619 | orchestrator | | f198c73a-e703-4565-a892-78a0b18d8691 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-02-02T04:44:00.000000 |
2026-02-02 04:44:07.916629 | orchestrator | | 29fd858e-1a5b-456e-b57a-52b73a56b5f5 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-02-02T04:44:01.000000 |
2026-02-02 04:44:07.916640 | orchestrator | | 4761e6f6-d99c-4725-bbdb-e9695a7c2645 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-02-02T04:44:01.000000 |
2026-02-02 04:44:07.916651 | orchestrator | | 794f4a27-f34e-47c9-9631-932c962076a9 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-02-02T04:44:02.000000 |
2026-02-02 04:44:07.916662 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-02-02 04:44:08.212073 | orchestrator | + openstack hypervisor list
2026-02-02 04:44:10.914449 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-02-02 04:44:10.914563 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2026-02-02 04:44:10.914583 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-02-02 04:44:10.914596 | orchestrator | | aa6c795e-fcd3-41d3-8795-c2b8df4ee308 | testbed-node-5 | QEMU | 192.168.16.15 | up |
2026-02-02 04:44:10.914610 | orchestrator | | 119e1182-44e5-49d1-bc06-d6fdf2063cc0 | testbed-node-4 | QEMU | 192.168.16.14 | up |
2026-02-02 04:44:10.914625 | orchestrator | | db79c99a-a63c-42f7-937a-539ec9d4e2ae | testbed-node-3 | QEMU | 192.168.16.13 | up |
2026-02-02 04:44:10.914644 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-02-02 04:44:11.230074 | orchestrator |
2026-02-02 04:44:11.230173 | orchestrator | # Run OpenStack test play
2026-02-02 04:44:11.230189 | orchestrator |
2026-02-02 04:44:11.230196 | orchestrator | + echo
2026-02-02 04:44:11.230204 | orchestrator | + echo '# Run OpenStack test play'
2026-02-02 04:44:11.230211 | orchestrator | + echo
2026-02-02 04:44:11.230218 | orchestrator | + osism apply --environment openstack test
2026-02-02 04:44:13.218823 | orchestrator | 2026-02-02 04:44:13 | INFO  | Trying to run play test in environment openstack
2026-02-02 04:44:23.393131 | orchestrator | 2026-02-02 04:44:23 | INFO  | Task 572f7f2d-3efc-4447-b9cc-15f7eaae02ea (test) was prepared for execution.
2026-02-02 04:44:23.393225 | orchestrator | 2026-02-02 04:44:23 | INFO  | It takes a moment until task 572f7f2d-3efc-4447-b9cc-15f7eaae02ea (test) has been started and output is visible here.
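The service checks above (`openstack volume service list`, `openstack network agent list`, `openstack compute service list`) are read by eye in this job. A minimal sketch of scripting the same gate, assuming JSON output from the CLI's `-f json` formatter; the sample records and the `down_services` helper are illustrative and not part of the job:

```python
import json

def down_services(records):
    """Services that are disabled or not reporting state 'up'."""
    return [r for r in records
            if r.get("Status") != "enabled" or r.get("State") != "up"]

# Illustrative records, shaped like `openstack volume service list -f json`.
sample = json.loads("""[
  {"Binary": "cinder-scheduler", "Host": "testbed-node-0",
   "Status": "enabled", "State": "up"},
  {"Binary": "cinder-volume", "Host": "testbed-node-1@rbd-volumes",
   "Status": "enabled", "State": "down"}
]""")

for svc in down_services(sample):
    print(svc["Binary"], svc["Host"])
```

The same filter works unchanged for the nova compute service table, since both share the `Status`/`State` columns.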
2026-02-02 04:46:57.462202 | orchestrator |
2026-02-02 04:46:57.462314 | orchestrator | PLAY [Create test project] *****************************************************
2026-02-02 04:46:57.462331 | orchestrator |
2026-02-02 04:46:57.462344 | orchestrator | TASK [Create test domain] ******************************************************
2026-02-02 04:46:57.462356 | orchestrator | Monday 02 February 2026 04:44:27 +0000 (0:00:00.077) 0:00:00.077 *******
2026-02-02 04:46:57.462367 | orchestrator | changed: [localhost]
2026-02-02 04:46:57.462379 | orchestrator |
2026-02-02 04:46:57.462390 | orchestrator | TASK [Create test-admin user] **************************************************
2026-02-02 04:46:57.462425 | orchestrator | Monday 02 February 2026 04:44:31 +0000 (0:00:03.705) 0:00:03.784 *******
2026-02-02 04:46:57.462437 | orchestrator | changed: [localhost]
2026-02-02 04:46:57.462447 | orchestrator |
2026-02-02 04:46:57.462458 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2026-02-02 04:46:57.462469 | orchestrator | Monday 02 February 2026 04:44:35 +0000 (0:00:04.318) 0:00:08.102 *******
2026-02-02 04:46:57.462480 | orchestrator | changed: [localhost]
2026-02-02 04:46:57.462491 | orchestrator |
2026-02-02 04:46:57.462502 | orchestrator | TASK [Create test project] *****************************************************
2026-02-02 04:46:57.462512 | orchestrator | Monday 02 February 2026 04:44:42 +0000 (0:00:06.346) 0:00:14.449 *******
2026-02-02 04:46:57.462523 | orchestrator | changed: [localhost]
2026-02-02 04:46:57.462534 | orchestrator |
2026-02-02 04:46:57.462544 | orchestrator | TASK [Create test user] ********************************************************
2026-02-02 04:46:57.462555 | orchestrator | Monday 02 February 2026 04:44:45 +0000 (0:00:03.962) 0:00:18.411 *******
2026-02-02 04:46:57.462567 | orchestrator | changed: [localhost]
2026-02-02 04:46:57.462578 | orchestrator |
2026-02-02 04:46:57.462589 | orchestrator | TASK [Add member roles to user test] *******************************************
2026-02-02 04:46:57.462602 | orchestrator | Monday 02 February 2026 04:44:50 +0000 (0:00:04.131) 0:00:22.543 *******
2026-02-02 04:46:57.462614 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2026-02-02 04:46:57.462627 | orchestrator | changed: [localhost] => (item=member)
2026-02-02 04:46:57.462641 | orchestrator | changed: [localhost] => (item=creator)
2026-02-02 04:46:57.462654 | orchestrator |
2026-02-02 04:46:57.462666 | orchestrator | TASK [Create test server group] ************************************************
2026-02-02 04:46:57.462679 | orchestrator | Monday 02 February 2026 04:45:01 +0000 (0:00:11.506) 0:00:34.049 *******
2026-02-02 04:46:57.462693 | orchestrator | changed: [localhost]
2026-02-02 04:46:57.462705 | orchestrator |
2026-02-02 04:46:57.462718 | orchestrator | TASK [Create ssh security group] ***********************************************
2026-02-02 04:46:57.462730 | orchestrator | Monday 02 February 2026 04:45:05 +0000 (0:00:04.247) 0:00:38.296 *******
2026-02-02 04:46:57.462743 | orchestrator | changed: [localhost]
2026-02-02 04:46:57.462755 | orchestrator |
2026-02-02 04:46:57.462768 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2026-02-02 04:46:57.462781 | orchestrator | Monday 02 February 2026 04:45:10 +0000 (0:00:04.783) 0:00:43.080 *******
2026-02-02 04:46:57.462793 | orchestrator | changed: [localhost]
2026-02-02 04:46:57.462806 | orchestrator |
2026-02-02 04:46:57.462818 | orchestrator | TASK [Create icmp security group] **********************************************
2026-02-02 04:46:57.462832 | orchestrator | Monday 02 February 2026 04:45:14 +0000 (0:00:04.253) 0:00:47.333 *******
2026-02-02 04:46:57.462844 | orchestrator | changed: [localhost]
2026-02-02 04:46:57.462855 | orchestrator |
2026-02-02 04:46:57.462866 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2026-02-02 04:46:57.462877 | orchestrator | Monday 02 February 2026 04:45:18 +0000 (0:00:03.928) 0:00:51.262 *******
2026-02-02 04:46:57.462888 | orchestrator | changed: [localhost]
2026-02-02 04:46:57.462898 | orchestrator |
2026-02-02 04:46:57.462909 | orchestrator | TASK [Create test keypair] *****************************************************
2026-02-02 04:46:57.462985 | orchestrator | Monday 02 February 2026 04:45:23 +0000 (0:00:04.179) 0:00:55.441 *******
2026-02-02 04:46:57.463168 | orchestrator | changed: [localhost]
2026-02-02 04:46:57.463191 | orchestrator |
2026-02-02 04:46:57.463206 | orchestrator | TASK [Create test network] *****************************************************
2026-02-02 04:46:57.463221 | orchestrator | Monday 02 February 2026 04:45:27 +0000 (0:00:04.351) 0:00:59.793 *******
2026-02-02 04:46:57.463237 | orchestrator | changed: [localhost]
2026-02-02 04:46:57.463253 | orchestrator |
2026-02-02 04:46:57.463270 | orchestrator | TASK [Create test subnet] ******************************************************
2026-02-02 04:46:57.463289 | orchestrator | Monday 02 February 2026 04:45:31 +0000 (0:00:04.500) 0:01:04.293 *******
2026-02-02 04:46:57.463322 | orchestrator | changed: [localhost]
2026-02-02 04:46:57.463334 | orchestrator |
2026-02-02 04:46:57.463344 | orchestrator | TASK [Create test router] ******************************************************
2026-02-02 04:46:57.463354 | orchestrator | Monday 02 February 2026 04:45:37 +0000 (0:00:05.143) 0:01:09.436 *******
2026-02-02 04:46:57.463363 | orchestrator | changed: [localhost]
2026-02-02 04:46:57.463373 | orchestrator |
2026-02-02 04:46:57.463383 | orchestrator | PLAY [Manage test instances and volumes] ***************************************
2026-02-02 04:46:57.463392 | orchestrator |
2026-02-02 04:46:57.463401 | orchestrator | TASK [Get test server group] ***************************************************
2026-02-02 04:46:57.463411 | orchestrator | Monday 02 February 2026 04:45:47 +0000 (0:00:10.102) 0:01:19.539 *******
2026-02-02 04:46:57.463420 | orchestrator | ok: [localhost]
2026-02-02 04:46:57.463430 | orchestrator |
2026-02-02 04:46:57.463439 | orchestrator | TASK [Detach test volume] ******************************************************
2026-02-02 04:46:57.463449 | orchestrator | Monday 02 February 2026 04:45:50 +0000 (0:00:03.520) 0:01:23.060 *******
2026-02-02 04:46:57.463458 | orchestrator | skipping: [localhost]
2026-02-02 04:46:57.463468 | orchestrator |
2026-02-02 04:46:57.463477 | orchestrator | TASK [Delete test volume] ******************************************************
2026-02-02 04:46:57.463487 | orchestrator | Monday 02 February 2026 04:45:50 +0000 (0:00:00.033) 0:01:23.094 *******
2026-02-02 04:46:57.463510 | orchestrator | skipping: [localhost]
2026-02-02 04:46:57.463520 | orchestrator |
2026-02-02 04:46:57.463529 | orchestrator | TASK [Delete test instances] ***************************************************
2026-02-02 04:46:57.463557 | orchestrator | Monday 02 February 2026 04:45:50 +0000 (0:00:00.049) 0:01:23.143 *******
2026-02-02 04:46:57.463568 | orchestrator | skipping: [localhost] => (item=test-4)
2026-02-02 04:46:57.463578 | orchestrator | skipping: [localhost] => (item=test-3)
2026-02-02 04:46:57.463608 | orchestrator | skipping: [localhost] => (item=test-2)
2026-02-02 04:46:57.463619 | orchestrator | skipping: [localhost] => (item=test-1)
2026-02-02 04:46:57.463629 | orchestrator | skipping: [localhost] => (item=test)
2026-02-02 04:46:57.463639 | orchestrator | skipping: [localhost]
2026-02-02 04:46:57.463648 | orchestrator |
2026-02-02 04:46:57.463658 | orchestrator | TASK [Wait for instance deletion to complete] **********************************
2026-02-02 04:46:57.463668 | orchestrator | Monday 02 February 2026 04:45:50 +0000 (0:00:00.167) 0:01:23.310 *******
2026-02-02 04:46:57.463678 | orchestrator | skipping: [localhost]
2026-02-02 04:46:57.463687 | orchestrator |
2026-02-02 04:46:57.463696 | orchestrator | TASK [Create test instances] ***************************************************
2026-02-02 04:46:57.463706 | orchestrator | Monday 02 February 2026 04:45:51 +0000 (0:00:00.159) 0:01:23.470 *******
2026-02-02 04:46:57.463716 | orchestrator | changed: [localhost] => (item=test)
2026-02-02 04:46:57.463725 | orchestrator | changed: [localhost] => (item=test-1)
2026-02-02 04:46:57.463735 | orchestrator | changed: [localhost] => (item=test-2)
2026-02-02 04:46:57.463744 | orchestrator | changed: [localhost] => (item=test-3)
2026-02-02 04:46:57.463754 | orchestrator | changed: [localhost] => (item=test-4)
2026-02-02 04:46:57.463763 | orchestrator |
2026-02-02 04:46:57.463773 | orchestrator | TASK [Wait for instance creation to complete] **********************************
2026-02-02 04:46:57.463782 | orchestrator | Monday 02 February 2026 04:45:55 +0000 (0:00:04.663) 0:01:28.133 *******
2026-02-02 04:46:57.463792 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-02-02 04:46:57.463803 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left).
2026-02-02 04:46:57.463812 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left).
2026-02-02 04:46:57.463822 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left).
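The "Wait for instance creation to complete" task above is Ansible's until/retries loop over async job results: a retry budget of 60, a FAILED - RETRYING line printed on each miss, success once every job reports finished. A minimal sketch of that polling pattern, assuming job dicts shaped like the `ansible_job_id` records printed later in this log; `poll_once` is a hypothetical stand-in for the `async_status` check, not a real module call:

```python
def wait_until_finished(jobs, poll_once, retries=60):
    """Re-poll every job until all report finished, within a retry budget."""
    for _ in range(retries):
        jobs = [poll_once(j) for j in jobs]
        if all(j["finished"] for j in jobs):
            return jobs
    raise TimeoutError("retries exhausted")

def make_poller():
    """Fake poller: each job finishes after its remaining polls reach 0."""
    def poll_once(job):
        job = dict(job, polls=job["polls"] - 1)
        job["finished"] = 1 if job["polls"] <= 0 else 0
        return job
    return poll_once

jobs = [{"ansible_job_id": "j1", "polls": 4, "finished": 0},
        {"ansible_job_id": "j2", "polls": 2, "finished": 0}]
done = wait_until_finished(jobs, make_poller())
```

In the play itself the same shape comes for free from `async:` tasks plus a follow-up task with `until: job.finished` and `retries: 60`.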
2026-02-02 04:46:57.463834 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j664681641889.3720', 'results_file': '/ansible/.ansible_async/j664681641889.3720', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-02-02 04:46:57.463853 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j930903068131.3745', 'results_file': '/ansible/.ansible_async/j930903068131.3745', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-02-02 04:46:57.463864 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j330967060761.3770', 'results_file': '/ansible/.ansible_async/j330967060761.3770', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-02-02 04:46:57.463874 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j569600280047.3795', 'results_file': '/ansible/.ansible_async/j569600280047.3795', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-02-02 04:46:57.463883 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j880566872885.3820', 'results_file': '/ansible/.ansible_async/j880566872885.3820', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-02-02 04:46:57.463893 | orchestrator |
2026-02-02 04:46:57.463903 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-02-02 04:46:57.463913 | orchestrator | Monday 02 February 2026 04:46:42 +0000 (0:00:47.134) 0:02:15.268 *******
2026-02-02 04:46:57.463922 | orchestrator | changed: [localhost] => (item=test)
2026-02-02 04:46:57.463932 | orchestrator | changed: [localhost] => (item=test-1)
2026-02-02 04:46:57.463942 | orchestrator | changed: [localhost] => (item=test-2)
2026-02-02 04:46:57.463951 | orchestrator | changed: [localhost] => (item=test-3)
2026-02-02 04:46:57.463961 | orchestrator | changed: [localhost] => (item=test-4)
2026-02-02 04:46:57.463973 | orchestrator |
2026-02-02 04:46:57.463989 | orchestrator | TASK [Wait for metadata to be added] *******************************************
2026-02-02 04:46:57.464005 | orchestrator | Monday 02 February 2026 04:46:47 +0000 (0:00:05.082) 0:02:20.351 *******
2026-02-02 04:46:57.464047 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).
2026-02-02 04:46:57.464066 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j290449236696.3924', 'results_file': '/ansible/.ansible_async/j290449236696.3924', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-02-02 04:46:57.464083 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j748501556771.3949', 'results_file': '/ansible/.ansible_async/j748501556771.3949', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-02-02 04:46:57.464102 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j39642865294.3974', 'results_file': '/ansible/.ansible_async/j39642865294.3974', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-02-02 04:46:57.464136 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j965005921417.3999', 'results_file': '/ansible/.ansible_async/j965005921417.3999', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-02-02 04:47:37.776625 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j558869095165.4024', 'results_file': '/ansible/.ansible_async/j558869095165.4024', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-02-02 04:47:37.776743 | orchestrator |
2026-02-02 04:47:37.776762 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-02-02 04:47:37.776776 | orchestrator | Monday 02 February 2026 04:46:57 +0000 (0:00:09.509) 0:02:29.860 *******
2026-02-02 04:47:37.776788 | orchestrator | changed: [localhost] => (item=test)
2026-02-02 04:47:37.776797 | orchestrator | changed: [localhost] => (item=test-1)
2026-02-02 04:47:37.776804 | orchestrator | changed: [localhost] => (item=test-2)
2026-02-02 04:47:37.776811 | orchestrator | changed: [localhost] => (item=test-3)
2026-02-02 04:47:37.776835 | orchestrator | changed: [localhost] => (item=test-4)
2026-02-02 04:47:37.776842 | orchestrator |
2026-02-02 04:47:37.776850 | orchestrator | TASK [Wait for tags to be added] ***********************************************
2026-02-02 04:47:37.776856 | orchestrator | Monday 02 February 2026 04:47:02 +0000 (0:00:04.919) 0:02:34.779 *******
2026-02-02 04:47:37.776863 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
2026-02-02 04:47:37.776870 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j834558411737.4093', 'results_file': '/ansible/.ansible_async/j834558411737.4093', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-02-02 04:47:37.776877 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j266903206692.4118', 'results_file': '/ansible/.ansible_async/j266903206692.4118', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-02-02 04:47:37.776883 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j796637200164.4144', 'results_file': '/ansible/.ansible_async/j796637200164.4144', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-02-02 04:47:37.776890 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j64236939857.4170', 'results_file': '/ansible/.ansible_async/j64236939857.4170', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-02-02 04:47:37.776896 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j710836139507.4196', 'results_file': '/ansible/.ansible_async/j710836139507.4196', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-02-02 04:47:37.776902 | orchestrator |
2026-02-02 04:47:37.776909 | orchestrator | TASK [Create test volume] ******************************************************
2026-02-02 04:47:37.776915 | orchestrator | Monday 02 February 2026 04:47:12 +0000 (0:00:10.098) 0:02:44.878 *******
2026-02-02 04:47:37.776921 | orchestrator | changed: [localhost]
2026-02-02 04:47:37.776927 | orchestrator |
2026-02-02 04:47:37.776934 | orchestrator | TASK [Attach test volume] ******************************************************
2026-02-02 04:47:37.776940 | orchestrator | Monday 02 February 2026 04:47:18 +0000 (0:00:06.437) 0:02:51.316 *******
2026-02-02 04:47:37.776946 | orchestrator | changed: [localhost]
2026-02-02 04:47:37.776952 | orchestrator |
2026-02-02 04:47:37.776993 | orchestrator | TASK [Create floating ip address] **********************************************
2026-02-02 04:47:37.777001 | orchestrator | Monday 02 February 2026 04:47:32 +0000 (0:00:13.407) 0:03:04.724 *******
2026-02-02 04:47:37.777007 | orchestrator | ok: [localhost]
2026-02-02 04:47:37.777013 | orchestrator |
2026-02-02 04:47:37.777020 | orchestrator | TASK [Print floating ip address] ***********************************************
2026-02-02 04:47:37.777026 | orchestrator | Monday 02 February 2026 04:47:37 +0000 (0:00:05.107) 0:03:09.832 *******
2026-02-02 04:47:37.777032 | orchestrator | ok: [localhost] => {
2026-02-02 04:47:37.777042 | orchestrator |     "msg": "192.168.112.116"
2026-02-02 04:47:37.777052 | orchestrator | }
2026-02-02 04:47:37.777064 | orchestrator |
2026-02-02 04:47:37.777074 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 04:47:37.777086 | orchestrator | localhost : ok=26  changed=23  unreachable=0  failed=0  skipped=4  rescued=0  ignored=0
2026-02-02 04:47:37.777098 | orchestrator |
2026-02-02 04:47:37.777108 | orchestrator |
2026-02-02 04:47:37.777117 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 04:47:37.777128 | orchestrator | Monday 02 February 2026 04:47:37 +0000 (0:00:00.042) 0:03:09.874 *******
2026-02-02 04:47:37.777138 | orchestrator | ===============================================================================
2026-02-02 04:47:37.777148 | orchestrator | Wait for instance creation to complete --------------------------------- 47.13s
2026-02-02 04:47:37.777158 | orchestrator | Attach test volume ----------------------------------------------------- 13.41s
2026-02-02 04:47:37.777195 | orchestrator | Add member roles to user test ------------------------------------------ 11.51s
2026-02-02 04:47:37.777207 | orchestrator | Create test router ----------------------------------------------------- 10.10s
2026-02-02 04:47:37.777218 | orchestrator | Wait for tags to be added ---------------------------------------------- 10.10s
2026-02-02 04:47:37.777228 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.51s
2026-02-02 04:47:37.777239 | orchestrator | Create test volume ------------------------------------------------------ 6.44s
2026-02-02 04:47:37.777267 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.35s
2026-02-02 04:47:37.777279 | orchestrator | Create test subnet ------------------------------------------------------ 5.14s
2026-02-02 04:47:37.777290 | orchestrator | Create floating ip address ---------------------------------------------- 5.11s
2026-02-02 04:47:37.777300 | orchestrator | Add metadata to instances ----------------------------------------------- 5.08s
2026-02-02 04:47:37.777311 | orchestrator | Add tag to instances ---------------------------------------------------- 4.92s
2026-02-02 04:47:37.777320 | orchestrator | Create ssh security group ----------------------------------------------- 4.78s
2026-02-02 04:47:37.777329 | orchestrator | Create test instances --------------------------------------------------- 4.66s
2026-02-02 04:47:37.777339 | orchestrator | Create test network ----------------------------------------------------- 4.50s
2026-02-02 04:47:37.777349 | orchestrator | Create test keypair ----------------------------------------------------- 4.35s
2026-02-02 04:47:37.777359 | orchestrator | Create test-admin user -------------------------------------------------- 4.32s
2026-02-02 04:47:37.777371 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.25s
2026-02-02 04:47:37.777381 | orchestrator | Create test server group ------------------------------------------------ 4.25s
2026-02-02 04:47:37.777391 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.18s
2026-02-02 04:47:38.131580 | orchestrator | + server_list
2026-02-02 04:47:38.131675 | orchestrator | + openstack --os-cloud test server list
2026-02-02 04:47:41.997523 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-02-02 04:47:41.997640 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-02-02 04:47:41.997665 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-02-02 04:47:41.997683 | orchestrator | | df0b3c9f-83b5-4d40-ab9d-f7fac922cbe0 | test-4 | ACTIVE | test=192.168.112.149, 192.168.200.176 | N/A (booted from volume) | SCS-1L-1 |
2026-02-02 04:47:41.997700 | orchestrator | | 0083bef7-cb37-4d2f-9784-ec262cc82776 | test-2 | ACTIVE | test=192.168.112.139, 192.168.200.235 | N/A (booted from volume) | SCS-1L-1 |
2026-02-02 04:47:41.997718 | orchestrator | | 435b0e07-ce0e-4f18-9857-398bd9abfd63 | test-3 | ACTIVE | test=192.168.112.154, 192.168.200.185 | N/A (booted from volume) | SCS-1L-1 |
2026-02-02 04:47:41.997738 | orchestrator | | 51516c7a-ece4-488b-91d3-b74d2ef355f1 | test | ACTIVE | test=192.168.112.116, 192.168.200.214 | N/A (booted from volume) | SCS-1L-1 |
2026-02-02 04:47:41.997756 | orchestrator | | 754ca09e-0ddd-479f-84a9-ea56aa71d8a2 | test-1 | ACTIVE | test=192.168.112.113, 192.168.200.151 | N/A (booted from volume) | SCS-1L-1 |
2026-02-02 04:47:41.997776 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-02-02 04:47:42.304800 | orchestrator | + openstack --os-cloud test server show test
2026-02-02 04:47:45.686277 | orchestrator |
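The "Add tag to instances" / "Wait for tags to be added" pair above tags each server and then polls until the tag is visible. A sketch of that visibility check, assuming server records shaped like `openstack server show -f json` output (a `name` and a `tags` list); the sample data and the `missing_tag` helper are illustrative, not the play's actual code:

```python
def missing_tag(servers, tag="test"):
    """Names of servers on which the expected tag is not yet visible."""
    return [s["name"] for s in servers if tag not in s.get("tags", [])]

# Illustrative records; in the job these would come from the Nova API.
sample = [
    {"name": "test", "tags": ["test"]},
    {"name": "test-1", "tags": []},
]
print(missing_tag(sample))
```

An empty result is the loop's exit condition; a non-empty one means another retry, exactly as the FAILED - RETRYING lines show.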
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-02 04:47:45.686448 | orchestrator | | Field | Value |
2026-02-02 04:47:45.686467 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-02 04:47:45.686487 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-02-02 04:47:45.686499 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-02-02 04:47:45.686511 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-02-02 04:47:45.686522 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-02-02 04:47:45.686533 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-02-02 04:47:45.686544 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-02-02 04:47:45.686576 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-02-02 04:47:45.686589 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-02-02 04:47:45.686608 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-02-02 04:47:45.686620 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-02-02 04:47:45.686636 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-02-02 04:47:45.686653 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-02-02 04:47:45.686675 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-02-02 04:47:45.686704 | orchestrator | | OS-EXT-STS:task_state | None |
2026-02-02 04:47:45.686749 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-02-02 04:47:45.686770 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-02T04:46:25.000000 |
2026-02-02 04:47:45.686819 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-02-02 04:47:45.686851 | orchestrator | | accessIPv4 | |
2026-02-02 04:47:45.686864 | orchestrator | | accessIPv6 | |
2026-02-02 04:47:45.686875 | orchestrator | | addresses | test=192.168.112.116, 192.168.200.214 |
2026-02-02 04:47:45.686892 | orchestrator | | config_drive | |
2026-02-02 04:47:45.686904 | orchestrator | | created | 2026-02-02T04:46:00Z |
2026-02-02 04:47:45.686915 | orchestrator | | description | None |
2026-02-02 04:47:45.686926 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-02-02 04:47:45.686937 | orchestrator | | hostId | 061148064dc676ed57735346fce559a7e92269d25d568a20b2eaf83f |
2026-02-02 04:47:45.686976 | orchestrator | | host_status | None |
2026-02-02 04:47:45.687005 | orchestrator | | id | 51516c7a-ece4-488b-91d3-b74d2ef355f1 |
2026-02-02 04:47:45.687017 | orchestrator | | image | N/A (booted from volume) |
2026-02-02 04:47:45.687029 | orchestrator | | key_name | test |
2026-02-02 04:47:45.687040 | orchestrator | | locked | False |
2026-02-02 04:47:45.687051 | orchestrator | | locked_reason | None |
2026-02-02 04:47:45.687063 | orchestrator | | name | test |
2026-02-02 04:47:45.687074 | orchestrator | | pinned_availability_zone | None |
2026-02-02 04:47:45.687085 | orchestrator | | progress | 0 |
2026-02-02 04:47:45.687097 | orchestrator | | project_id | ae7eef5a8e344177bd9d41429c19a59a |
2026-02-02 04:47:45.687114 | orchestrator | | properties | hostname='test' |
2026-02-02 04:47:45.687139 | orchestrator | | security_groups | name='icmp' |
2026-02-02 04:47:45.687152 | orchestrator | | | name='ssh' |
2026-02-02 04:47:45.687163 | orchestrator | | server_groups | None |
2026-02-02 04:47:45.687178 | orchestrator | | status | ACTIVE |
2026-02-02 04:47:45.687190 | orchestrator | | tags | test |
2026-02-02 04:47:45.687201 | orchestrator | | trusted_image_certificates | None |
2026-02-02 04:47:45.687213 | orchestrator | | updated | 2026-02-02T04:46:49Z |
2026-02-02 04:47:45.687224 | orchestrator | | user_id | 0e41ab947d4741a78443b0c6659a9f39 |
2026-02-02 04:47:45.687235 | orchestrator | | volumes_attached | delete_on_termination='True', id='0eac390f-e46c-4aeb-a17e-6cc413148da6' |
2026-02-02 04:47:45.687252 | orchestrator | | | delete_on_termination='False', id='b0ef784d-0809-47c0-a026-8d07d76912f1' |
2026-02-02 04:47:45.689298 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-02 04:47:46.027920 | orchestrator | + openstack --os-cloud test server show test-1
2026-02-02 04:47:49.044073 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-02
04:47:49.044188 | orchestrator | | Field | Value | 2026-02-02 04:47:49.044228 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-02 04:47:49.044241 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-02 04:47:49.044252 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-02 04:47:49.044264 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-02 04:47:49.044275 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-02-02 04:47:49.044310 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-02 04:47:49.044322 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-02 04:47:49.044355 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-02 04:47:49.044367 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-02 04:47:49.044379 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-02 04:47:49.044395 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-02 04:47:49.044411 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-02 04:47:49.044431 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-02 04:47:49.044443 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-02-02 04:47:49.044462 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-02 04:47:49.044474 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-02 04:47:49.044485 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-02T04:46:25.000000 | 2026-02-02 04:47:49.044506 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-02 04:47:49.044518 | orchestrator | | accessIPv4 | | 2026-02-02 
04:47:49.044530 | orchestrator | | accessIPv6 | | 2026-02-02 04:47:49.044556 | orchestrator | | addresses | test=192.168.112.113, 192.168.200.151 | 2026-02-02 04:47:49.044577 | orchestrator | | config_drive | | 2026-02-02 04:47:49.044589 | orchestrator | | created | 2026-02-02T04:46:00Z | 2026-02-02 04:47:49.044607 | orchestrator | | description | None | 2026-02-02 04:47:49.044619 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-02 04:47:49.044630 | orchestrator | | hostId | 061148064dc676ed57735346fce559a7e92269d25d568a20b2eaf83f | 2026-02-02 04:47:49.044641 | orchestrator | | host_status | None | 2026-02-02 04:47:49.044659 | orchestrator | | id | 754ca09e-0ddd-479f-84a9-ea56aa71d8a2 | 2026-02-02 04:47:49.044671 | orchestrator | | image | N/A (booted from volume) | 2026-02-02 04:47:49.044682 | orchestrator | | key_name | test | 2026-02-02 04:47:49.044698 | orchestrator | | locked | False | 2026-02-02 04:47:49.044710 | orchestrator | | locked_reason | None | 2026-02-02 04:47:49.044727 | orchestrator | | name | test-1 | 2026-02-02 04:47:49.044738 | orchestrator | | pinned_availability_zone | None | 2026-02-02 04:47:49.044749 | orchestrator | | progress | 0 | 2026-02-02 04:47:49.044761 | orchestrator | | project_id | ae7eef5a8e344177bd9d41429c19a59a | 2026-02-02 04:47:49.044772 | orchestrator | | properties | hostname='test-1' | 2026-02-02 04:47:49.044795 | orchestrator | | security_groups | name='icmp' | 2026-02-02 04:47:49.044816 | orchestrator | | | name='ssh' | 2026-02-02 04:47:49.044832 | orchestrator | | server_groups | None | 2026-02-02 04:47:49.044853 | orchestrator | | status | ACTIVE | 2026-02-02 
04:47:49.044865 | orchestrator | | tags | test | 2026-02-02 04:47:49.044893 | orchestrator | | trusted_image_certificates | None | 2026-02-02 04:47:49.044909 | orchestrator | | updated | 2026-02-02T04:46:49Z | 2026-02-02 04:47:49.044927 | orchestrator | | user_id | 0e41ab947d4741a78443b0c6659a9f39 | 2026-02-02 04:47:49.044939 | orchestrator | | volumes_attached | delete_on_termination='True', id='de0b2878-976c-4e9c-96c3-39add4358599' | 2026-02-02 04:47:49.048501 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-02 04:47:49.361634 | orchestrator | + openstack --os-cloud test server show test-2 2026-02-02 04:47:52.218933 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-02 04:47:52.219087 | orchestrator | | Field | Value | 2026-02-02 04:47:52.219130 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-02 04:47:52.219145 | 
orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-02 04:47:52.219177 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-02 04:47:52.219190 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-02 04:47:52.219231 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-02-02 04:47:52.219243 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-02 04:47:52.219255 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-02 04:47:52.219326 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-02 04:47:52.219342 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-02 04:47:52.219353 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-02 04:47:52.219370 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-02 04:47:52.219392 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-02 04:47:52.219403 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-02 04:47:52.219415 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-02-02 04:47:52.219426 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-02 04:47:52.219437 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-02 04:47:52.219449 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-02T04:46:25.000000 | 2026-02-02 04:47:52.219469 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-02 04:47:52.219484 | orchestrator | | accessIPv4 | | 2026-02-02 04:47:52.219498 | orchestrator | | accessIPv6 | | 2026-02-02 04:47:52.219522 | orchestrator | | addresses | test=192.168.112.139, 192.168.200.235 | 2026-02-02 04:47:52.219537 | orchestrator | | config_drive | | 2026-02-02 04:47:52.219550 | orchestrator | | created | 2026-02-02T04:46:01Z | 2026-02-02 04:47:52.219563 | orchestrator | | description | None | 2026-02-02 04:47:52.219577 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', 
extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-02 04:47:52.219590 | orchestrator | | hostId | 3f38bb85368600cd3a63d738e96193516ad73e4073c67528c5408558 | 2026-02-02 04:47:52.219603 | orchestrator | | host_status | None | 2026-02-02 04:47:52.219623 | orchestrator | | id | 0083bef7-cb37-4d2f-9784-ec262cc82776 | 2026-02-02 04:47:52.219637 | orchestrator | | image | N/A (booted from volume) | 2026-02-02 04:47:52.219655 | orchestrator | | key_name | test | 2026-02-02 04:47:52.219671 | orchestrator | | locked | False | 2026-02-02 04:47:52.219683 | orchestrator | | locked_reason | None | 2026-02-02 04:47:52.219694 | orchestrator | | name | test-2 | 2026-02-02 04:47:52.219706 | orchestrator | | pinned_availability_zone | None | 2026-02-02 04:47:52.219717 | orchestrator | | progress | 0 | 2026-02-02 04:47:52.219728 | orchestrator | | project_id | ae7eef5a8e344177bd9d41429c19a59a | 2026-02-02 04:47:52.219740 | orchestrator | | properties | hostname='test-2' | 2026-02-02 04:47:52.219758 | orchestrator | | security_groups | name='icmp' | 2026-02-02 04:47:52.219770 | orchestrator | | | name='ssh' | 2026-02-02 04:47:52.219788 | orchestrator | | server_groups | None | 2026-02-02 04:47:52.219804 | orchestrator | | status | ACTIVE | 2026-02-02 04:47:52.219815 | orchestrator | | tags | test | 2026-02-02 04:47:52.219827 | orchestrator | | trusted_image_certificates | None | 2026-02-02 04:47:52.219838 | orchestrator | | updated | 2026-02-02T04:46:50Z | 2026-02-02 04:47:52.219849 | orchestrator | | user_id | 0e41ab947d4741a78443b0c6659a9f39 | 2026-02-02 04:47:52.219861 | orchestrator | | volumes_attached | delete_on_termination='True', id='3b4ef645-cdea-4053-98e1-7bbfb640f059' | 2026-02-02 04:47:52.225105 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-02 04:47:52.529442 | orchestrator | + openstack --os-cloud test server show test-3 2026-02-02 04:47:55.434921 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-02 04:47:55.435123 | orchestrator | | Field | Value | 2026-02-02 04:47:55.435142 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-02 04:47:55.435169 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-02 04:47:55.435182 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-02 04:47:55.435193 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-02 04:47:55.435204 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-02-02 04:47:55.435215 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-02 04:47:55.435227 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-02 
04:47:55.435257 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-02 04:47:55.435277 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-02 04:47:55.435289 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-02 04:47:55.435300 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-02 04:47:55.435324 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-02 04:47:55.435335 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-02 04:47:55.435347 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-02-02 04:47:55.435358 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-02 04:47:55.435369 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-02 04:47:55.435380 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-02T04:46:25.000000 | 2026-02-02 04:47:55.435405 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-02 04:47:55.435417 | orchestrator | | accessIPv4 | | 2026-02-02 04:47:55.435428 | orchestrator | | accessIPv6 | | 2026-02-02 04:47:55.435440 | orchestrator | | addresses | test=192.168.112.154, 192.168.200.185 | 2026-02-02 04:47:55.435862 | orchestrator | | config_drive | | 2026-02-02 04:47:55.435879 | orchestrator | | created | 2026-02-02T04:46:01Z | 2026-02-02 04:47:55.435891 | orchestrator | | description | None | 2026-02-02 04:47:55.435902 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-02 04:47:55.435913 | orchestrator | | hostId | 3f38bb85368600cd3a63d738e96193516ad73e4073c67528c5408558 | 2026-02-02 04:47:55.435931 | orchestrator | | host_status | None | 2026-02-02 04:47:55.435976 | orchestrator | | id | 
435b0e07-ce0e-4f18-9857-398bd9abfd63 | 2026-02-02 04:47:55.435994 | orchestrator | | image | N/A (booted from volume) | 2026-02-02 04:47:55.436005 | orchestrator | | key_name | test | 2026-02-02 04:47:55.436017 | orchestrator | | locked | False | 2026-02-02 04:47:55.436028 | orchestrator | | locked_reason | None | 2026-02-02 04:47:55.436040 | orchestrator | | name | test-3 | 2026-02-02 04:47:55.436052 | orchestrator | | pinned_availability_zone | None | 2026-02-02 04:47:55.436063 | orchestrator | | progress | 0 | 2026-02-02 04:47:55.436075 | orchestrator | | project_id | ae7eef5a8e344177bd9d41429c19a59a | 2026-02-02 04:47:55.436104 | orchestrator | | properties | hostname='test-3' | 2026-02-02 04:47:55.436124 | orchestrator | | security_groups | name='icmp' | 2026-02-02 04:47:55.436141 | orchestrator | | | name='ssh' | 2026-02-02 04:47:55.436153 | orchestrator | | server_groups | None | 2026-02-02 04:47:55.436165 | orchestrator | | status | ACTIVE | 2026-02-02 04:47:55.436176 | orchestrator | | tags | test | 2026-02-02 04:47:55.436188 | orchestrator | | trusted_image_certificates | None | 2026-02-02 04:47:55.436199 | orchestrator | | updated | 2026-02-02T04:46:51Z | 2026-02-02 04:47:55.436210 | orchestrator | | user_id | 0e41ab947d4741a78443b0c6659a9f39 | 2026-02-02 04:47:55.436228 | orchestrator | | volumes_attached | delete_on_termination='True', id='80077503-1eb8-4387-8360-9e01a93586cb' | 2026-02-02 04:47:55.442199 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-02 04:47:55.842426 | orchestrator | + openstack --os-cloud test server show test-4 2026-02-02 04:47:58.764105 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-02 04:47:58.764225 | orchestrator | | Field | Value | 2026-02-02 04:47:58.764245 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-02 04:47:58.764260 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-02 04:47:58.764273 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-02 04:47:58.764287 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-02 04:47:58.764296 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-02-02 04:47:58.764323 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-02 04:47:58.764332 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-02 04:47:58.764358 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-02 04:47:58.764373 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-02 04:47:58.764382 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-02 04:47:58.764390 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-02 04:47:58.764398 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-02 04:47:58.764407 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-02 04:47:58.764415 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-02-02 04:47:58.764436 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-02 04:47:58.764444 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-02 04:47:58.764453 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-02T04:46:25.000000 | 2026-02-02 04:47:58.764467 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-02 04:47:58.764480 | orchestrator | | accessIPv4 | | 2026-02-02 04:47:58.764488 | orchestrator | | accessIPv6 | | 2026-02-02 04:47:58.764496 | orchestrator | | addresses | test=192.168.112.149, 192.168.200.176 | 2026-02-02 04:47:58.764504 | orchestrator | | config_drive | | 2026-02-02 04:47:58.764512 | orchestrator | | created | 2026-02-02T04:46:03Z | 2026-02-02 04:47:58.764521 | orchestrator | | description | None | 2026-02-02 04:47:58.764535 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-02 04:47:58.764543 | orchestrator | | hostId | 061148064dc676ed57735346fce559a7e92269d25d568a20b2eaf83f | 2026-02-02 04:47:58.764551 | orchestrator | | host_status | None | 2026-02-02 04:47:58.764565 | orchestrator | | id | df0b3c9f-83b5-4d40-ab9d-f7fac922cbe0 | 2026-02-02 04:47:58.764578 | orchestrator | | image | N/A (booted from volume) | 2026-02-02 04:47:58.764586 | orchestrator | | key_name | test | 2026-02-02 04:47:58.764594 | orchestrator | | locked | False | 2026-02-02 04:47:58.764602 | orchestrator | | locked_reason | None | 2026-02-02 04:47:58.764611 | orchestrator | | name | test-4 | 2026-02-02 04:47:58.764624 | orchestrator | | pinned_availability_zone | None | 2026-02-02 04:47:58.764632 | orchestrator | | progress | 0 | 2026-02-02 
04:47:58.764641 | orchestrator | | project_id | ae7eef5a8e344177bd9d41429c19a59a | 2026-02-02 04:47:58.764649 | orchestrator | | properties | hostname='test-4' | 2026-02-02 04:47:58.764663 | orchestrator | | security_groups | name='icmp' | 2026-02-02 04:47:58.764676 | orchestrator | | | name='ssh' | 2026-02-02 04:47:58.764684 | orchestrator | | server_groups | None | 2026-02-02 04:47:58.764692 | orchestrator | | status | ACTIVE | 2026-02-02 04:47:58.764700 | orchestrator | | tags | test | 2026-02-02 04:47:58.764714 | orchestrator | | trusted_image_certificates | None | 2026-02-02 04:47:58.764722 | orchestrator | | updated | 2026-02-02T04:46:52Z | 2026-02-02 04:47:58.764730 | orchestrator | | user_id | 0e41ab947d4741a78443b0c6659a9f39 | 2026-02-02 04:47:58.764738 | orchestrator | | volumes_attached | delete_on_termination='True', id='4d0c2e6d-fff2-4c45-8ac2-3e56b445603b' | 2026-02-02 04:47:58.768413 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-02 04:47:59.038166 | orchestrator | + server_ping 2026-02-02 04:47:59.039604 | orchestrator | ++ tr -d '\r' 2026-02-02 04:47:59.039649 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-02-02 04:48:01.930656 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-02 04:48:01.930760 | orchestrator | + ping -c3 192.168.112.154 2026-02-02 04:48:01.949274 | orchestrator | PING 192.168.112.154 (192.168.112.154) 56(84) bytes of data. 
2026-02-02 04:48:01.949363 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=1 ttl=63 time=9.47 ms 2026-02-02 04:48:02.943771 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=2 ttl=63 time=2.06 ms 2026-02-02 04:48:03.944551 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=3 ttl=63 time=1.82 ms 2026-02-02 04:48:03.944685 | orchestrator | 2026-02-02 04:48:03.944712 | orchestrator | --- 192.168.112.154 ping statistics --- 2026-02-02 04:48:03.944734 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-02-02 04:48:03.944803 | orchestrator | rtt min/avg/max/mdev = 1.820/4.449/9.471/3.552 ms 2026-02-02 04:48:03.944819 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-02 04:48:03.944832 | orchestrator | + ping -c3 192.168.112.149 2026-02-02 04:48:03.956128 | orchestrator | PING 192.168.112.149 (192.168.112.149) 56(84) bytes of data. 2026-02-02 04:48:03.956218 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=1 ttl=63 time=7.34 ms 2026-02-02 04:48:04.952693 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=2 ttl=63 time=2.06 ms 2026-02-02 04:48:05.954272 | orchestrator | 64 bytes from 192.168.112.149: icmp_seq=3 ttl=63 time=1.59 ms 2026-02-02 04:48:05.954364 | orchestrator | 2026-02-02 04:48:05.954378 | orchestrator | --- 192.168.112.149 ping statistics --- 2026-02-02 04:48:05.954413 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-02-02 04:48:05.954424 | orchestrator | rtt min/avg/max/mdev = 1.589/3.663/7.341/2.607 ms 2026-02-02 04:48:05.954962 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-02 04:48:05.954985 | orchestrator | + ping -c3 192.168.112.116 2026-02-02 04:48:05.967033 | orchestrator | PING 192.168.112.116 (192.168.112.116) 56(84) bytes of data. 
2026-02-02 04:48:05.967081 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=1 ttl=63 time=7.43 ms 2026-02-02 04:48:06.963826 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=2 ttl=63 time=2.08 ms 2026-02-02 04:48:07.965002 | orchestrator | 64 bytes from 192.168.112.116: icmp_seq=3 ttl=63 time=1.92 ms 2026-02-02 04:48:07.965100 | orchestrator | 2026-02-02 04:48:07.965115 | orchestrator | --- 192.168.112.116 ping statistics --- 2026-02-02 04:48:07.965127 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-02-02 04:48:07.965237 | orchestrator | rtt min/avg/max/mdev = 1.916/3.808/7.433/2.563 ms 2026-02-02 04:48:07.966135 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-02 04:48:07.966172 | orchestrator | + ping -c3 192.168.112.113 2026-02-02 04:48:07.979541 | orchestrator | PING 192.168.112.113 (192.168.112.113) 56(84) bytes of data. 2026-02-02 04:48:07.979609 | orchestrator | 64 bytes from 192.168.112.113: icmp_seq=1 ttl=63 time=8.69 ms 2026-02-02 04:48:08.975493 | orchestrator | 64 bytes from 192.168.112.113: icmp_seq=2 ttl=63 time=2.36 ms 2026-02-02 04:48:09.976479 | orchestrator | 64 bytes from 192.168.112.113: icmp_seq=3 ttl=63 time=1.51 ms 2026-02-02 04:48:09.976580 | orchestrator | 2026-02-02 04:48:09.976597 | orchestrator | --- 192.168.112.113 ping statistics --- 2026-02-02 04:48:09.976609 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-02-02 04:48:09.976621 | orchestrator | rtt min/avg/max/mdev = 1.514/4.185/8.685/3.200 ms 2026-02-02 04:48:09.976828 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-02 04:48:09.976849 | orchestrator | + ping -c3 192.168.112.139 2026-02-02 04:48:09.989060 | orchestrator | PING 192.168.112.139 (192.168.112.139) 56(84) bytes of data. 
2026-02-02 04:48:09.989147 | orchestrator | 64 bytes from 192.168.112.139: icmp_seq=1 ttl=63 time=7.41 ms 2026-02-02 04:48:10.984888 | orchestrator | 64 bytes from 192.168.112.139: icmp_seq=2 ttl=63 time=2.62 ms 2026-02-02 04:48:11.985187 | orchestrator | 64 bytes from 192.168.112.139: icmp_seq=3 ttl=63 time=1.78 ms 2026-02-02 04:48:11.985300 | orchestrator | 2026-02-02 04:48:11.985320 | orchestrator | --- 192.168.112.139 ping statistics --- 2026-02-02 04:48:11.985341 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-02-02 04:48:11.985359 | orchestrator | rtt min/avg/max/mdev = 1.784/3.939/7.414/2.480 ms 2026-02-02 04:48:11.985396 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-02 04:48:12.086976 | orchestrator | ok: Runtime: 0:10:27.648726 2026-02-02 04:48:12.127710 | 2026-02-02 04:48:12.127843 | TASK [Run tempest] 2026-02-02 04:48:12.662378 | orchestrator | skipping: Conditional result was False 2026-02-02 04:48:12.680933 | 2026-02-02 04:48:12.681085 | TASK [Check prometheus alert status] 2026-02-02 04:48:13.216622 | orchestrator | skipping: Conditional result was False 2026-02-02 04:48:13.232465 | 2026-02-02 04:48:13.232637 | PLAY [Upgrade testbed] 2026-02-02 04:48:13.243783 | 2026-02-02 04:48:13.243891 | TASK [Print next ceph version] 2026-02-02 04:48:13.312102 | orchestrator | ok 2026-02-02 04:48:13.321733 | 2026-02-02 04:48:13.321855 | TASK [Print next openstack version] 2026-02-02 04:48:13.391220 | orchestrator | ok 2026-02-02 04:48:13.403210 | 2026-02-02 04:48:13.403357 | TASK [Print next manager version] 2026-02-02 04:48:13.470439 | orchestrator | ok 2026-02-02 04:48:13.479939 | 2026-02-02 04:48:13.480066 | TASK [Set cloud fact (Zuul deployment)] 2026-02-02 04:48:13.526552 | orchestrator | ok 2026-02-02 04:48:13.537004 | 2026-02-02 04:48:13.537128 | TASK [Set cloud fact (local deployment)] 2026-02-02 04:48:13.561594 | orchestrator | skipping: Conditional result was False 2026-02-02 04:48:13.573503 | 2026-02-02 
04:48:13.573624 | TASK [Fetch manager address] 2026-02-02 04:48:13.845908 | orchestrator | ok 2026-02-02 04:48:13.856645 | 2026-02-02 04:48:13.856778 | TASK [Set manager_host address] 2026-02-02 04:48:13.931647 | orchestrator | ok 2026-02-02 04:48:13.941666 | 2026-02-02 04:48:13.941782 | TASK [Run upgrade] 2026-02-02 04:48:14.602261 | orchestrator | + set -e 2026-02-02 04:48:14.602379 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1 2026-02-02 04:48:14.602390 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1 2026-02-02 04:48:14.602400 | orchestrator | + CEPH_VERSION=reef 2026-02-02 04:48:14.602405 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-02-02 04:48:14.602410 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-02-02 04:48:14.602420 | orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0-rc.1 reef 2024.2 kolla/release' 2026-02-02 04:48:14.611074 | orchestrator | + set -e 2026-02-02 04:48:14.611166 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-02 04:48:14.611179 | orchestrator | ++ export INTERACTIVE=false 2026-02-02 04:48:14.611192 | orchestrator | ++ INTERACTIVE=false 2026-02-02 04:48:14.611198 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-02 04:48:14.611211 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-02 04:48:14.612479 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2026-02-02 04:48:14.645010 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0 2026-02-02 04:48:14.646104 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-02-02 04:48:14.681347 | orchestrator | 2026-02-02 04:48:14.681437 | orchestrator | # UPGRADE MANAGER 2026-02-02 04:48:14.681453 | orchestrator | 2026-02-02 04:48:14.681462 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2 2026-02-02 04:48:14.681470 | orchestrator | + echo 2026-02-02 04:48:14.681478 | orchestrator | + echo '# UPGRADE 
MANAGER' 2026-02-02 04:48:14.681487 | orchestrator | + echo 2026-02-02 04:48:14.681495 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1 2026-02-02 04:48:14.681503 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1 2026-02-02 04:48:14.681511 | orchestrator | + CEPH_VERSION=reef 2026-02-02 04:48:14.681519 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-02-02 04:48:14.681526 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-02-02 04:48:14.681534 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0-rc.1 2026-02-02 04:48:14.685938 | orchestrator | + set -e 2026-02-02 04:48:14.686078 | orchestrator | + VERSION=10.0.0-rc.1 2026-02-02 04:48:14.686102 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0-rc.1/g' /opt/configuration/environments/manager/configuration.yml 2026-02-02 04:48:14.691785 | orchestrator | + [[ 10.0.0-rc.1 != \l\a\t\e\s\t ]] 2026-02-02 04:48:14.691870 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-02 04:48:14.695296 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-02 04:48:14.699340 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-02 04:48:14.707812 | orchestrator | /opt/configuration ~ 2026-02-02 04:48:14.707898 | orchestrator | + set -e 2026-02-02 04:48:14.707991 | orchestrator | + pushd /opt/configuration 2026-02-02 04:48:14.708013 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-02 04:48:14.708036 | orchestrator | + source /opt/venv/bin/activate 2026-02-02 04:48:14.709060 | orchestrator | ++ deactivate nondestructive 2026-02-02 04:48:14.709092 | orchestrator | ++ '[' -n '' ']' 2026-02-02 04:48:14.709103 | orchestrator | ++ '[' -n '' ']' 2026-02-02 04:48:14.709114 | orchestrator | ++ hash -r 2026-02-02 04:48:14.709126 | orchestrator | ++ '[' -n '' ']' 2026-02-02 04:48:14.709137 | orchestrator | ++ unset VIRTUAL_ENV 
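The `set -x` trace above shows `set-manager-version.sh` pinning the manager version with `sed`. A minimal sketch of that behavior, reconstructed from the trace (the real script may differ; the function name and the throwaway demo file are assumptions):

```shell
#!/bin/sh
# Sketch of set-manager-version.sh as reconstructed from the `set -x`
# trace above (the real script may differ; the function name and the
# demo file are assumptions). It pins manager_version in the manager
# environment configuration and, unless "latest" is requested, deletes
# any explicit ceph_version/openstack_version pins so they follow the
# release defaults again.
set_manager_version() {
    version=$1 conf=$2
    sed -i "s/manager_version: .*/manager_version: ${version}/g" "$conf"
    if [ "$version" != latest ]; then
        sed -i '/ceph_version:/d' "$conf"
        sed -i '/openstack_version:/d' "$conf"
    fi
}

# Demo against a throwaway file instead of the real configuration.yml:
conf=$(mktemp)
printf 'manager_version: 9.5.0\nceph_version: quincy\nopenstack_version: 2024.1\n' > "$conf"
set_manager_version 10.0.0-rc.1 "$conf"
cat "$conf"   # → manager_version: 10.0.0-rc.1
rm -f "$conf"
```

Deleting the `ceph_version:`/`openstack_version:` lines rather than rewriting them matches the trace: the subsequent `sync-configuration-repository.sh` run re-renders them from the release metadata.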
2026-02-02 04:48:14.709147 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-02 04:48:14.709159 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-02-02 04:48:14.709171 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-02 04:48:14.709182 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-02 04:48:14.709193 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-02 04:48:14.709204 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-02 04:48:14.709216 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-02 04:48:14.709228 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-02 04:48:14.709239 | orchestrator | ++ export PATH 2026-02-02 04:48:14.709250 | orchestrator | ++ '[' -n '' ']' 2026-02-02 04:48:14.709261 | orchestrator | ++ '[' -z '' ']' 2026-02-02 04:48:14.709272 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-02 04:48:14.709284 | orchestrator | ++ PS1='(venv) ' 2026-02-02 04:48:14.709295 | orchestrator | ++ export PS1 2026-02-02 04:48:14.709306 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-02 04:48:14.709316 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-02 04:48:14.709327 | orchestrator | ++ hash -r 2026-02-02 04:48:14.709349 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-02-02 04:48:15.801121 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-02-02 04:48:15.801222 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-02-02 04:48:15.803239 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-02-02 04:48:15.805538 | orchestrator | Requirement already satisfied: PyYAML in 
/opt/venv/lib/python3.12/site-packages (6.0.3) 2026-02-02 04:48:15.807141 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0) 2026-02-02 04:48:15.821083 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-02-02 04:48:15.823843 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-02-02 04:48:15.824483 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-02-02 04:48:15.826605 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-02-02 04:48:15.876972 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-02-02 04:48:15.879614 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-02-02 04:48:15.882375 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-02-02 04:48:15.884394 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-02-02 04:48:15.890735 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-02-02 04:48:16.207119 | orchestrator | ++ which gilt 2026-02-02 04:48:16.208054 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-02-02 04:48:16.208078 | orchestrator | + /opt/venv/bin/gilt overlay 2026-02-02 04:48:16.470782 | orchestrator | osism.cfg-generics: 2026-02-02 04:48:16.571510 | orchestrator | - copied (v0.20251130.0) 
/home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-02-02 04:48:16.572441 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-02-02 04:48:16.574158 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-02-02 04:48:16.574187 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-02-02 04:48:17.507020 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-02-02 04:48:17.517250 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-02-02 04:48:17.978686 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-02-02 04:48:18.036926 | orchestrator | ~ 2026-02-02 04:48:18.037036 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-02 04:48:18.037051 | orchestrator | + deactivate 2026-02-02 04:48:18.037060 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-02 04:48:18.037070 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-02 04:48:18.037077 | orchestrator | + export PATH 2026-02-02 04:48:18.037085 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-02 04:48:18.037093 | orchestrator | + '[' -n '' ']' 2026-02-02 04:48:18.037101 | orchestrator | + hash -r 2026-02-02 04:48:18.037109 | orchestrator | + '[' -n '' ']' 2026-02-02 04:48:18.037117 | orchestrator | + unset VIRTUAL_ENV 2026-02-02 04:48:18.037124 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-02 04:48:18.037132 | 
orchestrator | + '[' '!' '' = nondestructive ']' 2026-02-02 04:48:18.037139 | orchestrator | + unset -f deactivate 2026-02-02 04:48:18.037147 | orchestrator | + popd 2026-02-02 04:48:18.039543 | orchestrator | + [[ 10.0.0-rc.1 == \l\a\t\e\s\t ]] 2026-02-02 04:48:18.039622 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release 2026-02-02 04:48:18.044791 | orchestrator | + set -e 2026-02-02 04:48:18.044853 | orchestrator | + NAMESPACE=kolla/release 2026-02-02 04:48:18.044868 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-02 04:48:18.052119 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-02 04:48:18.058687 | orchestrator | + set -e 2026-02-02 04:48:18.059692 | orchestrator | /opt/configuration ~ 2026-02-02 04:48:18.059717 | orchestrator | + pushd /opt/configuration 2026-02-02 04:48:18.059724 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-02 04:48:18.059732 | orchestrator | + source /opt/venv/bin/activate 2026-02-02 04:48:18.059738 | orchestrator | ++ deactivate nondestructive 2026-02-02 04:48:18.059745 | orchestrator | ++ '[' -n '' ']' 2026-02-02 04:48:18.059752 | orchestrator | ++ '[' -n '' ']' 2026-02-02 04:48:18.059759 | orchestrator | ++ hash -r 2026-02-02 04:48:18.059765 | orchestrator | ++ '[' -n '' ']' 2026-02-02 04:48:18.059772 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-02 04:48:18.059778 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-02 04:48:18.059785 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-02 04:48:18.059823 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-02 04:48:18.059831 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-02 04:48:18.059837 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-02 04:48:18.059848 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-02 04:48:18.059855 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-02 04:48:18.059864 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-02 04:48:18.059870 | orchestrator | ++ export PATH 2026-02-02 04:48:18.059877 | orchestrator | ++ '[' -n '' ']' 2026-02-02 04:48:18.059883 | orchestrator | ++ '[' -z '' ']' 2026-02-02 04:48:18.059889 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-02 04:48:18.059895 | orchestrator | ++ PS1='(venv) ' 2026-02-02 04:48:18.059901 | orchestrator | ++ export PS1 2026-02-02 04:48:18.059929 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-02 04:48:18.059936 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-02 04:48:18.059942 | orchestrator | ++ hash -r 2026-02-02 04:48:18.059949 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-02-02 04:48:18.590189 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-02-02 04:48:18.591185 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-02-02 04:48:18.592610 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-02-02 04:48:18.593799 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-02-02 04:48:18.595120 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-02-02 04:48:18.605540 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-02-02 04:48:18.607226 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-02-02 04:48:18.608093 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-02-02 04:48:18.609480 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-02-02 04:48:18.651005 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-02-02 04:48:18.652997 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-02-02 04:48:18.655107 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-02-02 04:48:18.656241 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-02-02 04:48:18.661638 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-02-02 04:48:18.898609 | orchestrator | ++ which gilt 2026-02-02 04:48:18.900005 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-02-02 04:48:18.900049 | orchestrator | + /opt/venv/bin/gilt overlay 2026-02-02 04:48:19.064251 | orchestrator | osism.cfg-generics: 2026-02-02 04:48:19.122809 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-02-02 04:48:19.122875 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-02-02 04:48:19.122970 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-02-02 04:48:19.123042 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-02-02 04:48:19.698957 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-02-02 04:48:19.712765 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-02-02 04:48:20.069581 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-02-02 04:48:20.134605 | orchestrator | ~ 2026-02-02 04:48:20.134677 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-02 04:48:20.134686 | orchestrator | + deactivate 2026-02-02 04:48:20.134717 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-02 04:48:20.134724 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-02 04:48:20.134730 | orchestrator | + export PATH 2026-02-02 04:48:20.134736 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-02 04:48:20.134742 | orchestrator | + '[' -n '' ']' 2026-02-02 04:48:20.134748 | orchestrator | + hash -r 2026-02-02 04:48:20.134753 | orchestrator | + '[' -n '' ']' 2026-02-02 04:48:20.134759 | orchestrator | + unset VIRTUAL_ENV 2026-02-02 04:48:20.134765 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-02 04:48:20.134771 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-02 04:48:20.134777 | orchestrator | + unset -f deactivate 2026-02-02 04:48:20.134782 | orchestrator | + popd 2026-02-02 04:48:20.138101 | orchestrator | ++ semver v0.20251130.0 6.0.0 2026-02-02 04:48:20.182180 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-02 04:48:20.183023 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-02-02 04:48:20.251747 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-02 04:48:20.251842 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-02-02 04:48:20.254763 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml 2026-02-02 04:48:20.260545 | orchestrator | +++ semver v0.20251130.0 9.5.0 2026-02-02 04:48:20.311054 | orchestrator | ++ '[' -1 -le 0 ']' 2026-02-02 04:48:20.311255 | orchestrator | +++ semver 10.0.0-rc.1 10.0.0-0 2026-02-02 04:48:20.396479 | orchestrator | ++ '[' 1 -ge 0 ']' 2026-02-02 04:48:20.396544 | orchestrator | ++ echo true 2026-02-02 04:48:20.396775 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true 2026-02-02 04:48:20.398977 | orchestrator | +++ semver 2024.2 2024.2 2026-02-02 04:48:20.476527 | orchestrator | ++ '[' 0 -le 0 ']' 2026-02-02 04:48:20.477854 | orchestrator | +++ semver 2024.2 2025.1 2026-02-02 04:48:20.528037 | orchestrator | ++ '[' -1 -ge 0 ']' 2026-02-02 04:48:20.528101 | orchestrator | ++ echo false 2026-02-02 04:48:20.528652 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false 2026-02-02 04:48:20.528701 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-02 04:48:20.528708 | orchestrator | + echo 'om_rpc_vhost: openstack' 2026-02-02 04:48:20.528759 | orchestrator | + echo 'om_notify_vhost: openstack' 2026-02-02 04:48:20.528825 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml 2026-02-02 04:48:20.535245 | orchestrator | + 
echo 'export RABBITMQ3TO4=true' 2026-02-02 04:48:20.535326 | orchestrator | + sudo tee -a /opt/manager-vars.sh 2026-02-02 04:48:20.553983 | orchestrator | export RABBITMQ3TO4=true 2026-02-02 04:48:20.557884 | orchestrator | + osism update manager 2026-02-02 04:48:26.255393 | orchestrator | Collecting uv 2026-02-02 04:48:26.338413 | orchestrator | Downloading uv-0.9.28-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB) 2026-02-02 04:48:26.355508 | orchestrator | Downloading uv-0.9.28-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (22.7 MB) 2026-02-02 04:48:27.093879 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 22.7/22.7 MB 34.9 MB/s eta 0:00:00 2026-02-02 04:48:27.151983 | orchestrator | Installing collected packages: uv 2026-02-02 04:48:27.590204 | orchestrator | Successfully installed uv-0.9.28 2026-02-02 04:48:28.217219 | orchestrator | Resolved 11 packages in 343ms 2026-02-02 04:48:28.250819 | orchestrator | Downloading cryptography (4.2MiB) 2026-02-02 04:48:28.250936 | orchestrator | Downloading ansible-core (2.1MiB) 2026-02-02 04:48:28.250952 | orchestrator | Downloading ansible (54.5MiB) 2026-02-02 04:48:28.250965 | orchestrator | Downloading netaddr (2.2MiB) 2026-02-02 04:48:28.558202 | orchestrator | Downloaded netaddr 2026-02-02 04:48:28.677055 | orchestrator | Downloaded cryptography 2026-02-02 04:48:28.699070 | orchestrator | Downloaded ansible-core 2026-02-02 04:48:34.073608 | orchestrator | Downloaded ansible 2026-02-02 04:48:34.073772 | orchestrator | Prepared 11 packages in 5.85s 2026-02-02 04:48:34.542677 | orchestrator | Installed 11 packages in 466ms 2026-02-02 04:48:34.542796 | orchestrator | + ansible==11.11.0 2026-02-02 04:48:34.542820 | orchestrator | + ansible-core==2.18.13 2026-02-02 04:48:34.542840 | orchestrator | + cffi==2.0.0 2026-02-02 04:48:34.542861 | orchestrator | + cryptography==46.0.4 2026-02-02 04:48:34.542881 | orchestrator | + jinja2==3.1.6 2026-02-02 04:48:34.542964 | orchestrator | 
+ markupsafe==3.0.3 2026-02-02 04:48:34.542984 | orchestrator | + netaddr==1.3.0 2026-02-02 04:48:34.543004 | orchestrator | + packaging==26.0 2026-02-02 04:48:34.543023 | orchestrator | + pycparser==3.0 2026-02-02 04:48:34.543042 | orchestrator | + pyyaml==6.0.3 2026-02-02 04:48:34.543062 | orchestrator | + resolvelib==1.0.1 2026-02-02 04:48:35.818781 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-200877vfp51oph/tmpzh5r9bat/ansible-collection-serviceskvk48i8o'... 2026-02-02 04:48:37.111417 | orchestrator | Your branch is up to date with 'origin/main'. 2026-02-02 04:48:37.111501 | orchestrator | Already on 'main' 2026-02-02 04:48:37.583693 | orchestrator | Starting galaxy collection install process 2026-02-02 04:48:37.583765 | orchestrator | Process install dependency map 2026-02-02 04:48:37.583773 | orchestrator | Starting collection install process 2026-02-02 04:48:37.583779 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services' 2026-02-02 04:48:37.583786 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services 2026-02-02 04:48:37.583791 | orchestrator | osism.services:999.0.0 was installed successfully 2026-02-02 04:48:38.140664 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-2009617iuqvtjc/tmpwulgupnh/ansible-playbooks-manager042ifjvv'... 2026-02-02 04:48:38.700231 | orchestrator | Your branch is up to date with 'origin/main'. 
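The `semver` comparisons traced earlier (`MANAGER_UPGRADE_CROSSES_10`, `OPENSTACK_UPGRADE_CROSSES_2025`) gate the RabbitMQ 3-to-4 and vhost migration steps. A hedged sketch of that boundary check, assuming the semantics implied by the trace (the actual `semver` helper is not shown in the log, and `version_le`/`crosses_10` are made-up names; GNU `sort -V` stands in for real semver precedence):

```shell
#!/bin/sh
# Sketch of the boundary check reconstructed from the `set -x` trace:
# an upgrade "crosses 10" when the old manager version is at or below
# the last 9.x release and the target version is at or above 10.0.0.
# Helper names are assumptions; GNU `sort -V` approximates semver order.
version_le() {
    # True if $1 <= $2 under GNU version sort.
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

crosses_10() {
    old=$1 new=$2
    if version_le "$old" "9.5.0" && version_le "10.0.0" "$new"; then
        echo true
    else
        echo false
    fi
}

crosses_10 0.20251130.0 10.0.0-rc.1   # → true, matching the log
```

When the flag is true, the trace shows the script appending `om_rpc_vhost`/`om_notify_vhost` overrides and exporting `RABBITMQ3TO4=true` into `/opt/manager-vars.sh` before `osism update manager` runs.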
2026-02-02 04:48:38.700330 | orchestrator | Already on 'main' 2026-02-02 04:48:39.023201 | orchestrator | Starting galaxy collection install process 2026-02-02 04:48:39.023295 | orchestrator | Process install dependency map 2026-02-02 04:48:39.023310 | orchestrator | Starting collection install process 2026-02-02 04:48:39.023322 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager' 2026-02-02 04:48:39.023335 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager 2026-02-02 04:48:39.023346 | orchestrator | osism.manager:999.0.0 was installed successfully 2026-02-02 04:48:39.692717 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2026-02-02 04:48:39.692848 | orchestrator | -vvvv to see details 2026-02-02 04:48:40.101457 | orchestrator | 2026-02-02 04:48:40.101616 | orchestrator | PLAY [Apply role manager] ****************************************************** 2026-02-02 04:48:40.101633 | orchestrator | 2026-02-02 04:48:40.101650 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-02 04:48:43.875158 | orchestrator | ok: [testbed-manager] 2026-02-02 04:48:43.875271 | orchestrator | 2026-02-02 04:48:43.875291 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-02-02 04:48:43.952347 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-02-02 04:48:43.952437 | orchestrator | 2026-02-02 04:48:43.952474 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-02-02 04:48:45.875348 | orchestrator | ok: [testbed-manager] 2026-02-02 04:48:45.875455 | orchestrator | 2026-02-02 04:48:45.875472 | orchestrator | TASK 
[osism.services.manager : Gather variables for each operating system] ***** 2026-02-02 04:48:45.925340 | orchestrator | ok: [testbed-manager] 2026-02-02 04:48:45.925427 | orchestrator | 2026-02-02 04:48:45.925444 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-02-02 04:48:46.004302 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-02-02 04:48:46.004404 | orchestrator | 2026-02-02 04:48:46.004427 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-02-02 04:48:50.020341 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible) 2026-02-02 04:48:50.020441 | orchestrator | ok: [testbed-manager] => (item=/opt/archive) 2026-02-02 04:48:50.020455 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration) 2026-02-02 04:48:50.020478 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data) 2026-02-02 04:48:50.020488 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-02-02 04:48:50.020497 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets) 2026-02-02 04:48:50.020507 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets) 2026-02-02 04:48:50.020517 | orchestrator | ok: [testbed-manager] => (item=/opt/state) 2026-02-02 04:48:50.020527 | orchestrator | 2026-02-02 04:48:50.020539 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-02-02 04:48:51.049709 | orchestrator | ok: [testbed-manager] 2026-02-02 04:48:51.049830 | orchestrator | 2026-02-02 04:48:51.049854 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-02-02 04:48:51.997528 | orchestrator | ok: [testbed-manager] 2026-02-02 04:48:51.997614 | orchestrator | 2026-02-02 04:48:51.997627 | orchestrator | TASK [osism.services.manager : Include ara 
config tasks] *********************** 2026-02-02 04:48:52.092827 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-02-02 04:48:52.092972 | orchestrator | 2026-02-02 04:48:52.092983 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-02-02 04:48:53.967768 | orchestrator | ok: [testbed-manager] => (item=ara) 2026-02-02 04:48:53.967952 | orchestrator | ok: [testbed-manager] => (item=ara-server) 2026-02-02 04:48:53.967983 | orchestrator | 2026-02-02 04:48:53.968005 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-02-02 04:48:54.935260 | orchestrator | ok: [testbed-manager] 2026-02-02 04:48:54.935361 | orchestrator | 2026-02-02 04:48:54.935379 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-02-02 04:48:54.987049 | orchestrator | skipping: [testbed-manager] 2026-02-02 04:48:54.987137 | orchestrator | 2026-02-02 04:48:54.987152 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-02-02 04:48:55.070011 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-02-02 04:48:55.070180 | orchestrator | 2026-02-02 04:48:55.070207 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-02-02 04:48:56.046267 | orchestrator | ok: [testbed-manager] 2026-02-02 04:48:56.046374 | orchestrator | 2026-02-02 04:48:56.046390 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-02-02 04:48:56.102291 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-02-02 04:48:56.102421 | 
orchestrator | 2026-02-02 04:48:56.102448 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-02-02 04:48:57.997979 | orchestrator | ok: [testbed-manager] => (item=None) 2026-02-02 04:48:57.998128 | orchestrator | ok: [testbed-manager] => (item=None) 2026-02-02 04:48:57.998143 | orchestrator | ok: [testbed-manager] 2026-02-02 04:48:57.998156 | orchestrator | 2026-02-02 04:48:57.998168 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-02-02 04:48:58.886329 | orchestrator | ok: [testbed-manager] 2026-02-02 04:48:58.886454 | orchestrator | 2026-02-02 04:48:58.886479 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-02-02 04:48:58.961142 | orchestrator | skipping: [testbed-manager] 2026-02-02 04:48:58.961268 | orchestrator | 2026-02-02 04:48:58.961299 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-02-02 04:48:59.064029 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-02-02 04:48:59.064097 | orchestrator | 2026-02-02 04:48:59.064103 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-02-02 04:48:59.784247 | orchestrator | ok: [testbed-manager] 2026-02-02 04:48:59.784334 | orchestrator | 2026-02-02 04:48:59.784348 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-02-02 04:49:01.381226 | orchestrator | ok: [testbed-manager] 2026-02-02 04:49:01.381369 | orchestrator | 2026-02-02 04:49:01.381386 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-02-02 04:49:03.239486 | orchestrator | ok: [testbed-manager] => (item=conductor) 2026-02-02 04:49:03.239609 | orchestrator | ok: [testbed-manager] => 
(item=openstack) 2026-02-02 04:49:03.239632 | orchestrator | 2026-02-02 04:49:03.239651 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-02-02 04:49:04.414353 | orchestrator | changed: [testbed-manager] 2026-02-02 04:49:04.414466 | orchestrator | 2026-02-02 04:49:04.414484 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-02-02 04:49:04.916043 | orchestrator | ok: [testbed-manager] 2026-02-02 04:49:04.916118 | orchestrator | 2026-02-02 04:49:04.916125 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-02-02 04:49:05.434960 | orchestrator | ok: [testbed-manager] 2026-02-02 04:49:05.435061 | orchestrator | 2026-02-02 04:49:05.435100 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-02-02 04:49:05.482448 | orchestrator | skipping: [testbed-manager] 2026-02-02 04:49:05.482538 | orchestrator | 2026-02-02 04:49:05.482554 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-02-02 04:49:05.547790 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-02-02 04:49:05.547972 | orchestrator | 2026-02-02 04:49:05.548004 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-02-02 04:49:05.604559 | orchestrator | ok: [testbed-manager] 2026-02-02 04:49:05.604670 | orchestrator | 2026-02-02 04:49:05.604688 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-02-02 04:49:08.444996 | orchestrator | ok: [testbed-manager] => (item=osism) 2026-02-02 04:49:08.445074 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker) 2026-02-02 04:49:08.445084 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager) 
2026-02-02 04:49:08.445091 | orchestrator |
2026-02-02 04:49:08.445098 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-02-02 04:49:09.418546 | orchestrator | ok: [testbed-manager]
2026-02-02 04:49:09.418643 | orchestrator |
2026-02-02 04:49:09.418658 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-02-02 04:49:10.525207 | orchestrator | ok: [testbed-manager]
2026-02-02 04:49:10.525308 | orchestrator |
2026-02-02 04:49:10.525325 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-02-02 04:49:11.507340 | orchestrator | ok: [testbed-manager]
2026-02-02 04:49:11.507442 | orchestrator |
2026-02-02 04:49:11.507458 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-02-02 04:49:11.577441 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-02-02 04:49:11.577532 | orchestrator |
2026-02-02 04:49:11.577546 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-02-02 04:49:11.641155 | orchestrator | ok: [testbed-manager]
2026-02-02 04:49:11.641262 | orchestrator |
2026-02-02 04:49:11.641279 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-02-02 04:49:12.622381 | orchestrator | ok: [testbed-manager] => (item=osism-include)
2026-02-02 04:49:12.622483 | orchestrator |
2026-02-02 04:49:12.622501 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-02-02 04:49:12.698292 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-02-02 04:49:12.698391 | orchestrator |
2026-02-02 04:49:12.698406 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-02-02 04:49:13.677095 | orchestrator | ok: [testbed-manager]
2026-02-02 04:49:13.677205 | orchestrator |
2026-02-02 04:49:13.677223 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-02-02 04:49:14.779334 | orchestrator | ok: [testbed-manager]
2026-02-02 04:49:14.779473 | orchestrator |
2026-02-02 04:49:14.779490 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-02-02 04:49:14.827316 | orchestrator | skipping: [testbed-manager]
2026-02-02 04:49:14.827409 | orchestrator |
2026-02-02 04:49:14.827424 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-02-02 04:49:14.886442 | orchestrator | ok: [testbed-manager]
2026-02-02 04:49:14.886529 | orchestrator |
2026-02-02 04:49:14.886548 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-02-02 04:49:16.132074 | orchestrator | changed: [testbed-manager]
2026-02-02 04:49:16.132177 | orchestrator |
2026-02-02 04:49:16.132194 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-02-02 04:50:22.434521 | orchestrator | changed: [testbed-manager]
2026-02-02 04:50:22.434664 | orchestrator |
2026-02-02 04:50:22.434694 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-02-02 04:50:23.725480 | orchestrator | ok: [testbed-manager]
2026-02-02 04:50:23.725612 | orchestrator |
2026-02-02 04:50:23.725641 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-02-02 04:50:23.790980 | orchestrator | skipping: [testbed-manager]
2026-02-02 04:50:23.791082 | orchestrator |
2026-02-02 04:50:23.791099 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-02-02 04:50:24.625031 | orchestrator | ok: [testbed-manager]
2026-02-02 04:50:24.625241 | orchestrator |
2026-02-02 04:50:24.625257 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-02-02 04:50:24.704850 | orchestrator | skipping: [testbed-manager]
2026-02-02 04:50:24.704947 | orchestrator |
2026-02-02 04:50:24.704963 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-02-02 04:50:24.704976 | orchestrator |
2026-02-02 04:50:24.704987 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-02-02 04:50:44.028987 | orchestrator | changed: [testbed-manager]
2026-02-02 04:50:44.029105 | orchestrator |
2026-02-02 04:50:44.029126 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-02-02 04:51:44.103281 | orchestrator | Pausing for 60 seconds
2026-02-02 04:51:44.103401 | orchestrator | changed: [testbed-manager]
2026-02-02 04:51:44.103417 | orchestrator |
2026-02-02 04:51:44.103430 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] ***
2026-02-02 04:51:44.160246 | orchestrator | ok: [testbed-manager]
2026-02-02 04:51:44.160376 | orchestrator |
2026-02-02 04:51:44.160397 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-02-02 04:51:48.405171 | orchestrator | changed: [testbed-manager]
2026-02-02 04:51:48.405273 | orchestrator |
2026-02-02 04:51:48.405286 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-02-02 04:52:51.366119 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-02-02 04:52:51.366208 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-02-02 04:52:51.366217 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2026-02-02 04:52:51.366224 | orchestrator | changed: [testbed-manager]
2026-02-02 04:52:51.366232 | orchestrator |
2026-02-02 04:52:51.366238 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-02-02 04:53:03.415761 | orchestrator | changed: [testbed-manager]
2026-02-02 04:53:03.415865 | orchestrator |
2026-02-02 04:53:03.415881 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-02-02 04:53:03.493530 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-02-02 04:53:03.493686 | orchestrator |
2026-02-02 04:53:03.493707 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-02-02 04:53:03.493722 | orchestrator |
2026-02-02 04:53:03.493736 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-02-02 04:53:03.543573 | orchestrator | skipping: [testbed-manager]
2026-02-02 04:53:03.543726 | orchestrator |
2026-02-02 04:53:03.543743 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-02-02 04:53:03.609165 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-02-02 04:53:03.609266 | orchestrator |
2026-02-02 04:53:03.609304 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-02-02 04:53:04.745466 | orchestrator | changed: [testbed-manager]
2026-02-02 04:53:04.745532 | orchestrator |
2026-02-02 04:53:04.745539 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-02-02 04:53:08.127063 | orchestrator | ok: [testbed-manager]
2026-02-02 04:53:08.127178 | orchestrator |
2026-02-02 04:53:08.127196 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-02-02 04:53:08.200773 | orchestrator | ok: [testbed-manager] => {
2026-02-02 04:53:08.200855 | orchestrator | "version_check_result.stdout_lines": [
2026-02-02 04:53:08.200864 | orchestrator | "=== OSISM Container Version Check ===",
2026-02-02 04:53:08.200870 | orchestrator | "Checking running containers against expected versions...",
2026-02-02 04:53:08.200876 | orchestrator | "",
2026-02-02 04:53:08.200882 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-02-02 04:53:08.200887 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251208.0",
2026-02-02 04:53:08.200893 | orchestrator | " Enabled: true",
2026-02-02 04:53:08.200898 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251208.0",
2026-02-02 04:53:08.200904 | orchestrator | " Status: ✅ MATCH",
2026-02-02 04:53:08.200909 | orchestrator | "",
2026-02-02 04:53:08.200914 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-02-02 04:53:08.200919 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251208.0",
2026-02-02 04:53:08.200924 | orchestrator | " Enabled: true",
2026-02-02 04:53:08.200929 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251208.0",
2026-02-02 04:53:08.200934 | orchestrator | " Status: ✅ MATCH",
2026-02-02 04:53:08.200939 | orchestrator | "",
2026-02-02 04:53:08.200944 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-02-02 04:53:08.200949 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251208.0",
2026-02-02 04:53:08.200954 | orchestrator | " Enabled: true",
2026-02-02 04:53:08.200958 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251208.0",
2026-02-02 04:53:08.200963 | orchestrator | " Status: ✅ MATCH",
2026-02-02 04:53:08.200968 | orchestrator | "",
2026-02-02 04:53:08.200973 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-02-02 04:53:08.200978 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251208.0",
2026-02-02 04:53:08.200983 | orchestrator | " Enabled: true",
2026-02-02 04:53:08.200988 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251208.0",
2026-02-02 04:53:08.200993 | orchestrator | " Status: ✅ MATCH",
2026-02-02 04:53:08.200997 | orchestrator | "",
2026-02-02 04:53:08.201002 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-02-02 04:53:08.201007 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251208.0",
2026-02-02 04:53:08.201012 | orchestrator | " Enabled: true",
2026-02-02 04:53:08.201016 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251208.0",
2026-02-02 04:53:08.201021 | orchestrator | " Status: ✅ MATCH",
2026-02-02 04:53:08.201025 | orchestrator | "",
2026-02-02 04:53:08.201030 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-02-02 04:53:08.201049 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-02 04:53:08.201054 | orchestrator | " Enabled: true",
2026-02-02 04:53:08.201059 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-02 04:53:08.201063 | orchestrator | " Status: ✅ MATCH",
2026-02-02 04:53:08.201068 | orchestrator | "",
2026-02-02 04:53:08.201073 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-02-02 04:53:08.201078 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-02-02 04:53:08.201082 | orchestrator | " Enabled: true",
2026-02-02 04:53:08.201087 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-02-02 04:53:08.201091 | orchestrator | " Status: ✅ MATCH",
2026-02-02 04:53:08.201096 | orchestrator | "",
2026-02-02 04:53:08.201100 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-02-02 04:53:08.201105 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-02-02 04:53:08.201109 | orchestrator | " Enabled: true",
2026-02-02 04:53:08.201120 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-02-02 04:53:08.201125 | orchestrator | " Status: ✅ MATCH",
2026-02-02 04:53:08.201129 | orchestrator | "",
2026-02-02 04:53:08.201134 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-02-02 04:53:08.201138 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251208.0",
2026-02-02 04:53:08.201143 | orchestrator | " Enabled: true",
2026-02-02 04:53:08.201147 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251208.0",
2026-02-02 04:53:08.201152 | orchestrator | " Status: ✅ MATCH",
2026-02-02 04:53:08.201156 | orchestrator | "",
2026-02-02 04:53:08.201164 | orchestrator | "Checking service: redis (Redis Cache)",
2026-02-02 04:53:08.201169 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-02-02 04:53:08.201174 | orchestrator | " Enabled: true",
2026-02-02 04:53:08.201178 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-02-02 04:53:08.201183 | orchestrator | " Status: ✅ MATCH",
2026-02-02 04:53:08.201187 | orchestrator | "",
2026-02-02 04:53:08.201192 | orchestrator | "Checking service: api (OSISM API Service)",
2026-02-02 04:53:08.201197 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-02 04:53:08.201201 | orchestrator | " Enabled: true",
2026-02-02 04:53:08.201206 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-02 04:53:08.201210 | orchestrator | " Status: ✅ MATCH",
2026-02-02 04:53:08.201215 | orchestrator | "",
2026-02-02 04:53:08.201219 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-02-02 04:53:08.201224 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-02 04:53:08.201228 | orchestrator | " Enabled: true",
2026-02-02 04:53:08.201233 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-02 04:53:08.201237 | orchestrator | " Status: ✅ MATCH",
2026-02-02 04:53:08.201242 | orchestrator | "",
2026-02-02 04:53:08.201246 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-02-02 04:53:08.201251 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-02 04:53:08.201255 | orchestrator | " Enabled: true",
2026-02-02 04:53:08.201260 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-02 04:53:08.201264 | orchestrator | " Status: ✅ MATCH",
2026-02-02 04:53:08.201269 | orchestrator | "",
2026-02-02 04:53:08.201273 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-02-02 04:53:08.201278 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-02 04:53:08.201284 | orchestrator | " Enabled: true",
2026-02-02 04:53:08.201289 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-02 04:53:08.201306 | orchestrator | " Status: ✅ MATCH",
2026-02-02 04:53:08.201312 | orchestrator | "",
2026-02-02 04:53:08.201317 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-02-02 04:53:08.201323 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-02 04:53:08.201373 | orchestrator | " Enabled: true",
2026-02-02 04:53:08.201378 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-02 04:53:08.201383 | orchestrator | " Status: ✅ MATCH",
2026-02-02 04:53:08.201389 | orchestrator | "",
2026-02-02 04:53:08.201394 | orchestrator | "=== Summary ===",
2026-02-02 04:53:08.201399 | orchestrator | "Errors (version mismatches): 0",
2026-02-02 04:53:08.201405 | orchestrator | "Warnings (expected containers not running): 0",
2026-02-02 04:53:08.201410 | orchestrator | "",
2026-02-02 04:53:08.201415 | orchestrator | "✅ All running containers match expected versions!"
2026-02-02 04:53:08.201421 | orchestrator | ]
2026-02-02 04:53:08.201426 | orchestrator | }
2026-02-02 04:53:08.201432 | orchestrator |
2026-02-02 04:53:08.201437 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-02-02 04:53:08.253827 | orchestrator | skipping: [testbed-manager]
2026-02-02 04:53:08.253893 | orchestrator |
2026-02-02 04:53:08.253900 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 04:53:08.253906 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0
2026-02-02 04:53:08.253910 | orchestrator |
2026-02-02 04:53:20.722238 | orchestrator | 2026-02-02 04:53:20 | INFO  | Task 9596f6d3-b83e-4d9d-98b2-d97eda431f30 (sync inventory) is running in background. Output coming soon.
2026-02-02 04:53:49.816059 | orchestrator | 2026-02-02 04:53:22 | INFO  | Starting group_vars file reorganization
2026-02-02 04:53:49.816174 | orchestrator | 2026-02-02 04:53:22 | INFO  | Moved 0 file(s) to their respective directories
2026-02-02 04:53:49.816191 | orchestrator | 2026-02-02 04:53:22 | INFO  | Group_vars file reorganization completed
2026-02-02 04:53:49.816224 | orchestrator | 2026-02-02 04:53:25 | INFO  | Starting variable preparation from inventory
2026-02-02 04:53:49.816236 | orchestrator | 2026-02-02 04:53:27 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-02-02 04:53:49.816248 | orchestrator | 2026-02-02 04:53:27 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-02-02 04:53:49.816259 | orchestrator | 2026-02-02 04:53:27 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-02-02 04:53:49.816270 | orchestrator | 2026-02-02 04:53:27 | INFO  | 3 file(s) written, 6 host(s) processed
2026-02-02 04:53:49.816281 | orchestrator | 2026-02-02 04:53:27 | INFO  | Variable preparation completed
2026-02-02 04:53:49.816291 | orchestrator | 2026-02-02 04:53:29 | INFO  | Starting inventory overwrite handling
2026-02-02 04:53:49.816302 | orchestrator | 2026-02-02 04:53:29 | INFO  | Handling group overwrites in 99-overwrite
2026-02-02 04:53:49.816313 | orchestrator | 2026-02-02 04:53:29 | INFO  | Removing group frr:children from 60-generic
2026-02-02 04:53:49.816323 | orchestrator | 2026-02-02 04:53:29 | INFO  | Removing group netbird:children from 50-infrastructure
2026-02-02 04:53:49.816334 | orchestrator | 2026-02-02 04:53:29 | INFO  | Removing group ceph-rgw from 50-ceph
2026-02-02 04:53:49.816345 | orchestrator | 2026-02-02 04:53:29 | INFO  | Removing group ceph-mds from 50-ceph
2026-02-02 04:53:49.816356 | orchestrator | 2026-02-02 04:53:29 | INFO  | Handling group overwrites in 20-roles
2026-02-02 04:53:49.816367 | orchestrator | 2026-02-02 04:53:29 | INFO  | Removing group k3s_node from 50-infrastructure
2026-02-02 04:53:49.816378 | orchestrator | 2026-02-02 04:53:29 | INFO  | Removed 5 group(s) in total
2026-02-02 04:53:49.816389 | orchestrator | 2026-02-02 04:53:29 | INFO  | Inventory overwrite handling completed
2026-02-02 04:53:49.816400 | orchestrator | 2026-02-02 04:53:31 | INFO  | Starting merge of inventory files
2026-02-02 04:53:49.816410 | orchestrator | 2026-02-02 04:53:31 | INFO  | Inventory files merged successfully
2026-02-02 04:53:49.816444 | orchestrator | 2026-02-02 04:53:36 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-02-02 04:53:49.816455 | orchestrator | 2026-02-02 04:53:48 | INFO  | Successfully wrote ClusterShell configuration
2026-02-02 04:53:50.166404 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-02 04:53:50.166521 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-02-02 04:53:50.166545 | orchestrator | + local max_attempts=60
2026-02-02 04:53:50.166566 | orchestrator | + local name=kolla-ansible
2026-02-02 04:53:50.166585 | orchestrator | + local attempt_num=1
2026-02-02 04:53:50.166875 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-02-02 04:53:50.202584 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-02 04:53:50.202746 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-02-02 04:53:50.202774 | orchestrator | + local max_attempts=60
2026-02-02 04:53:50.202795 | orchestrator | + local name=osism-ansible
2026-02-02 04:53:50.202813 | orchestrator | + local attempt_num=1
2026-02-02 04:53:50.203252 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-02-02 04:53:50.229571 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-02 04:53:50.229692 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-02-02 04:53:50.370460 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-02-02 04:53:50.370552 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251208.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy)
2026-02-02 04:53:50.370567 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251208.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy)
2026-02-02 04:53:50.370579 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp
2026-02-02 04:53:50.370595 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 hours ago Up 2 minutes (healthy) 8000/tcp
2026-02-02 04:53:50.370680 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy)
2026-02-02 04:53:50.370695 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy)
2026-02-02 04:53:50.370707 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251208.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy)
2026-02-02 04:53:50.370718 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" listener 3 minutes ago Restarting (0) 15 seconds ago
2026-02-02 04:53:50.370729 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 hours ago Up 3 minutes (healthy) 3306/tcp
2026-02-02 04:53:50.370740 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy)
2026-02-02 04:53:50.370750 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 hours ago Up 3 minutes (healthy) 6379/tcp
2026-02-02 04:53:50.370761 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251208.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy)
2026-02-02 04:53:50.370799 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251208.0 "docker-entrypoint.s…" frontend 3 minutes ago Up 3 minutes 192.168.16.5:3000->3000/tcp
2026-02-02 04:53:50.370810 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251208.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy)
2026-02-02 04:53:50.370821 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy)
2026-02-02 04:53:50.376091 | orchestrator | + [[ '' == \t\r\u\e ]]
2026-02-02 04:53:50.376132 | orchestrator | + [[ '' == \f\a\l\s\e ]]
2026-02-02 04:53:50.376143 | orchestrator | + osism apply facts
2026-02-02 04:54:02.581795 | orchestrator | 2026-02-02 04:54:02 | INFO  | Task c2cc0983-669c-42bc-8a24-bb0f28d62b89 (facts) was prepared for execution.
2026-02-02 04:54:02.581909 | orchestrator | 2026-02-02 04:54:02 | INFO  | It takes a moment until task c2cc0983-669c-42bc-8a24-bb0f28d62b89 (facts) has been started and output is visible here.
2026-02-02 04:54:21.294196 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-02-02 04:54:21.294314 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-02-02 04:54:21.294343 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-02-02 04:54:21.294355 | orchestrator | (): 'NoneType' object is not subscriptable
2026-02-02 04:54:21.294377 | orchestrator |
2026-02-02 04:54:21.294389 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-02-02 04:54:21.294407 | orchestrator |
2026-02-02 04:54:21.294426 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-02 04:54:21.294445 | orchestrator | Monday 02 February 2026 04:54:08 +0000 (0:00:01.904) 0:00:01.904 *******
2026-02-02 04:54:21.294464 | orchestrator | ok: [testbed-manager]
2026-02-02 04:54:21.294484 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:54:21.294539 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:54:21.294558 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:54:21.294765 | orchestrator | ok: [testbed-node-3]
2026-02-02 04:54:21.294787 | orchestrator | ok: [testbed-node-4]
2026-02-02 04:54:21.294800 | orchestrator | ok: [testbed-node-5]
2026-02-02 04:54:21.294815 | orchestrator |
2026-02-02 04:54:21.294828 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-02 04:54:21.294841 | orchestrator | Monday 02 February 2026 04:54:11 +0000 (0:00:02.226) 0:00:04.131 *******
2026-02-02 04:54:21.294853 | orchestrator | skipping: [testbed-manager]
2026-02-02 04:54:21.294866 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:54:21.294901 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:54:21.294914 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:54:21.294931 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:54:21.294944 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:54:21.294957 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:54:21.294969 | orchestrator |
2026-02-02 04:54:21.294982 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-02 04:54:21.294994 | orchestrator |
2026-02-02 04:54:21.295007 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-02 04:54:21.295020 | orchestrator | Monday 02 February 2026 04:54:12 +0000 (0:00:01.789) 0:00:05.921 *******
2026-02-02 04:54:21.295034 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:54:21.295046 | orchestrator | ok: [testbed-manager]
2026-02-02 04:54:21.295058 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:54:21.295072 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:54:21.295110 | orchestrator | ok: [testbed-node-3]
2026-02-02 04:54:21.295121 | orchestrator | ok: [testbed-node-4]
2026-02-02 04:54:21.295132 | orchestrator | ok: [testbed-node-5]
2026-02-02 04:54:21.295142 | orchestrator |
2026-02-02 04:54:21.295153 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-02-02 04:54:21.295164 | orchestrator |
2026-02-02 04:54:21.295175 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-02-02 04:54:21.295185 | orchestrator | Monday 02 February 2026 04:54:19 +0000 (0:00:06.074) 0:00:11.996 *******
2026-02-02 04:54:21.295196 | orchestrator | skipping: [testbed-manager]
2026-02-02 04:54:21.295207 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:54:21.295217 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:54:21.295228 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:54:21.295239 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:54:21.295249 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:54:21.295259 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:54:21.295270 | orchestrator | 2026-02-02 04:54:21.295281 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 04:54:21.295292 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 04:54:21.295304 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 04:54:21.295315 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 04:54:21.295326 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 04:54:21.295336 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 04:54:21.295347 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 04:54:21.295358 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 04:54:21.295369 | orchestrator | 2026-02-02 04:54:21.295380 | orchestrator | 2026-02-02 04:54:21.295391 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 04:54:21.295401 | orchestrator | Monday 02 February 2026 04:54:20 +0000 (0:00:01.720) 0:00:13.716 ******* 2026-02-02 04:54:21.295412 | orchestrator | =============================================================================== 2026-02-02 04:54:21.295423 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.08s 2026-02-02 04:54:21.295433 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 2.23s 2026-02-02 04:54:21.295444 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.79s 2026-02-02 04:54:21.295454 | orchestrator | Gather facts for all hosts 
---------------------------------------------- 1.72s
2026-02-02 04:54:21.651305 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0
2026-02-02 04:54:21.742844 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-02 04:54:21.743137 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible
2026-02-02 04:54:21.775127 | orchestrator | + OPENSTACK_VERSION=2025.1
2026-02-02 04:54:21.775228 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1
2026-02-02 04:54:21.779454 | orchestrator | + set -e
2026-02-02 04:54:21.779535 | orchestrator | + NAMESPACE=kolla/release/2025.1
2026-02-02 04:54:21.779549 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-02-02 04:54:21.785549 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh
2026-02-02 04:54:21.795178 | orchestrator |
2026-02-02 04:54:21.795254 | orchestrator | # UPGRADE SERVICES
2026-02-02 04:54:21.795295 | orchestrator |
2026-02-02 04:54:21.795308 | orchestrator | + set -e
2026-02-02 04:54:21.795320 | orchestrator | + echo
2026-02-02 04:54:21.795331 | orchestrator | + echo '# UPGRADE SERVICES'
2026-02-02 04:54:21.795342 | orchestrator | + echo
2026-02-02 04:54:21.795353 | orchestrator | + source /opt/manager-vars.sh
2026-02-02 04:54:21.796246 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-02 04:54:21.796276 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-02 04:54:21.796287 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-02 04:54:21.796298 | orchestrator | ++ CEPH_VERSION=reef
2026-02-02 04:54:21.796309 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-02 04:54:21.796322 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-02 04:54:21.796333 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-02 04:54:21.796344 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-02 04:54:21.796355 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-02 04:54:21.796366 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-02 04:54:21.796425 | orchestrator | ++ export ARA=false
2026-02-02 04:54:21.796437 | orchestrator | ++ ARA=false
2026-02-02 04:54:21.796448 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-02 04:54:21.796459 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-02 04:54:21.796470 | orchestrator | ++ export TEMPEST=false
2026-02-02 04:54:21.796480 | orchestrator | ++ TEMPEST=false
2026-02-02 04:54:21.796491 | orchestrator | ++ export IS_ZUUL=true
2026-02-02 04:54:21.796501 | orchestrator | ++ IS_ZUUL=true
2026-02-02 04:54:21.796512 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.102
2026-02-02 04:54:21.796603 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.102
2026-02-02 04:54:21.796616 | orchestrator | ++ export EXTERNAL_API=false
2026-02-02 04:54:21.796627 | orchestrator | ++ EXTERNAL_API=false
2026-02-02 04:54:21.796637 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-02 04:54:21.796648 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-02 04:54:21.796659 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-02 04:54:21.796670 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-02 04:54:21.796784 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-02 04:54:21.796801 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-02 04:54:21.796812 | orchestrator | ++ export RABBITMQ3TO4=true
2026-02-02 04:54:21.796822 | orchestrator | ++ RABBITMQ3TO4=true
2026-02-02 04:54:21.796851 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false
2026-02-02 04:54:21.796862 | orchestrator | + SKIP_CEPH_UPGRADE=false
2026-02-02 04:54:21.796886 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh
2026-02-02 04:54:21.805427 | orchestrator | + set -e
2026-02-02 04:54:21.805496 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-02 04:54:21.806228 | orchestrator | ++ export INTERACTIVE=false
2026-02-02 04:54:21.806261 | orchestrator | ++ INTERACTIVE=false
2026-02-02 04:54:21.806273 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-02 04:54:21.806286 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-02 04:54:21.806298 | orchestrator | + source /opt/manager-vars.sh
2026-02-02 04:54:21.806310 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-02 04:54:21.806323 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-02 04:54:21.806458 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-02 04:54:21.806472 | orchestrator | ++ CEPH_VERSION=reef
2026-02-02 04:54:21.806483 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-02 04:54:21.806494 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-02 04:54:21.806505 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-02 04:54:21.806516 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-02 04:54:21.806527 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-02 04:54:21.806538 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-02 04:54:21.806728 | orchestrator |
2026-02-02 04:54:21.806747 | orchestrator | # PULL IMAGES
2026-02-02 04:54:21.806758 | orchestrator |
2026-02-02 04:54:21.806769 | orchestrator | ++ export ARA=false
2026-02-02 04:54:21.806780 | orchestrator | ++ ARA=false
2026-02-02 04:54:21.806791 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-02 04:54:21.806802 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-02 04:54:21.806812 | orchestrator | ++ export TEMPEST=false
2026-02-02 04:54:21.806824 | orchestrator | ++ TEMPEST=false
2026-02-02 04:54:21.806835 | orchestrator | ++ export IS_ZUUL=true
2026-02-02 04:54:21.806846 | orchestrator | ++ IS_ZUUL=true
2026-02-02 04:54:21.806856 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.102
2026-02-02 04:54:21.806867 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.102
2026-02-02 04:54:21.806878 | orchestrator | ++ export EXTERNAL_API=false
2026-02-02 04:54:21.806889 | orchestrator | ++ EXTERNAL_API=false
2026-02-02 04:54:21.806899 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-02 04:54:21.806910 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-02 04:54:21.806920 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-02 04:54:21.806931 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-02 04:54:21.806964 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-02 04:54:21.806975 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-02 04:54:21.806986 | orchestrator | ++ export RABBITMQ3TO4=true
2026-02-02 04:54:21.806997 | orchestrator | ++ RABBITMQ3TO4=true
2026-02-02 04:54:21.807007 | orchestrator | + echo
2026-02-02 04:54:21.807018 | orchestrator | + echo '# PULL IMAGES'
2026-02-02 04:54:21.807029 | orchestrator | + echo
2026-02-02 04:54:21.808219 | orchestrator | ++ semver 9.5.0 7.0.0
2026-02-02 04:54:21.875812 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-02 04:54:21.875905 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-02-02 04:54:23.890082 | orchestrator | 2026-02-02 04:54:23 | INFO  | Trying to run play pull-images in environment custom
2026-02-02 04:54:34.058356 | orchestrator | 2026-02-02 04:54:34 | INFO  | Task 4b58c670-b5f8-4222-9564-565a2aae8ae8 (pull-images) was prepared for execution.
2026-02-02 04:54:34.058433 | orchestrator | 2026-02-02 04:54:34 | INFO  | Task 4b58c670-b5f8-4222-9564-565a2aae8ae8 is running in background. No more output. Check ARA for logs.
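The trace above gates each step on a `semver` helper that prints `1`, `0` or `-1` for a version comparison (e.g. `++ semver 9.5.0 7.0.0` followed by `+ [[ 1 -ge 0 ]]`). The helper binary itself is not shown in this log; the sketch below is an assumed bash stand-in that compares only `major.minor.patch` and ignores pre-release tags such as `-rc.1`, which real semver precedence does take into account.

```shell
#!/usr/bin/env bash
# Sketch of the version gate seen in the trace. compare_semver is a
# hypothetical stand-in for the testbed's `semver` helper: it prints
# 1, 0 or -1 and ignores pre-release suffixes like "-rc.1".
compare_semver() {
    local a=${1%%-*} b=${2%%-*}   # strip pre-release suffixes
    local IFS=.
    local -a A=($a) B=($b)
    local i
    for i in 0 1 2; do
        if (( ${A[i]:-0} > ${B[i]:-0} )); then echo 1; return; fi
        if (( ${A[i]:-0} < ${B[i]:-0} )); then echo -1; return; fi
    done
    echo 0
}

# Same shape as the traced gate: run the step only on manager >= 7.0.0.
if [[ $(compare_semver 9.5.0 7.0.0) -ge 0 ]]; then
    echo "would run: osism apply --no-wait -r 2 -e custom pull-images"
fi
```

The gate pattern keeps older deployments from running upgrade steps that only apply to newer manager releases.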
2026-02-02 04:54:34.425458 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh
2026-02-02 04:54:34.436193 | orchestrator | + set -e
2026-02-02 04:54:34.436238 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-02 04:54:34.436252 | orchestrator | ++ export INTERACTIVE=false
2026-02-02 04:54:34.436266 | orchestrator | ++ INTERACTIVE=false
2026-02-02 04:54:34.436277 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-02 04:54:34.436288 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-02 04:54:34.436299 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-02 04:54:34.437670 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-02 04:54:34.450749 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1
2026-02-02 04:54:34.450839 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1
2026-02-02 04:54:34.451719 | orchestrator | ++ semver 10.0.0-rc.1 8.0.3
2026-02-02 04:54:34.506433 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-02 04:54:34.506511 | orchestrator | + osism apply frr
2026-02-02 04:54:46.731993 | orchestrator | 2026-02-02 04:54:46 | INFO  | Task bdceffb8-52fb-4add-82ba-b300209d8767 (frr) was prepared for execution.
2026-02-02 04:54:46.732104 | orchestrator | 2026-02-02 04:54:46 | INFO  | It takes a moment until task bdceffb8-52fb-4add-82ba-b300209d8767 (frr) has been started and output is visible here.
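`manager-version.sh` derives `MANAGER_VERSION` from the configuration repository with a single awk call, visible in the trace above. The same extraction can be reproduced against a throwaway file; the awk invocation is copied from the trace, while the sample YAML content and temp-file handling are invented for illustration.

```shell
#!/usr/bin/env bash
# Reproduce the MANAGER_VERSION extraction from manager-version.sh against
# a temporary file. The awk call matches the traced one; the sample YAML
# below is assumed, not taken from the real configuration repository.
set -e
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
---
manager_version: 10.0.0-rc.1
openstack_version: 2024.2
EOF
# -F': ' splits "manager_version: 10.0.0-rc.1" into key and value.
MANAGER_VERSION=$(awk -F': ' '/^manager_version:/ { print $2 }' "$cfg")
export MANAGER_VERSION
echo "MANAGER_VERSION=$MANAGER_VERSION"
rm -f "$cfg"
```

Anchoring the pattern at line start (`/^manager_version:/`) keeps the match from picking up similarly named keys nested deeper in the file.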
2026-02-02 04:55:17.628404 | orchestrator |
2026-02-02 04:55:17.628579 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-02-02 04:55:17.628599 | orchestrator |
2026-02-02 04:55:17.628612 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-02-02 04:55:17.628625 | orchestrator | Monday 02 February 2026 04:54:53 +0000 (0:00:02.237) 0:00:02.237 *******
2026-02-02 04:55:17.628637 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-02-02 04:55:17.628650 | orchestrator |
2026-02-02 04:55:17.628663 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-02-02 04:55:17.628674 | orchestrator | Monday 02 February 2026 04:54:55 +0000 (0:00:01.858) 0:00:04.095 *******
2026-02-02 04:55:17.628686 | orchestrator | ok: [testbed-manager]
2026-02-02 04:55:17.628699 | orchestrator |
2026-02-02 04:55:17.628711 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-02-02 04:55:17.628723 | orchestrator | Monday 02 February 2026 04:54:57 +0000 (0:00:02.091) 0:00:06.187 *******
2026-02-02 04:55:17.628735 | orchestrator | ok: [testbed-manager]
2026-02-02 04:55:17.628746 | orchestrator |
2026-02-02 04:55:17.628758 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-02-02 04:55:17.628769 | orchestrator | Monday 02 February 2026 04:55:00 +0000 (0:00:03.032) 0:00:09.220 *******
2026-02-02 04:55:17.628781 | orchestrator | ok: [testbed-manager]
2026-02-02 04:55:17.628793 | orchestrator |
2026-02-02 04:55:17.628804 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-02-02 04:55:17.628816 | orchestrator | Monday 02 February 2026 04:55:02 +0000 (0:00:01.900) 0:00:11.120 *******
2026-02-02 04:55:17.628850 | orchestrator | ok: [testbed-manager]
2026-02-02 04:55:17.628862 | orchestrator |
2026-02-02 04:55:17.628874 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-02-02 04:55:17.628885 | orchestrator | Monday 02 February 2026 04:55:04 +0000 (0:00:01.905) 0:00:13.026 *******
2026-02-02 04:55:17.628896 | orchestrator | ok: [testbed-manager]
2026-02-02 04:55:17.628908 | orchestrator |
2026-02-02 04:55:17.628919 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-02-02 04:55:17.628932 | orchestrator | Monday 02 February 2026 04:55:07 +0000 (0:00:02.433) 0:00:15.460 *******
2026-02-02 04:55:17.628944 | orchestrator | skipping: [testbed-manager]
2026-02-02 04:55:17.628955 | orchestrator |
2026-02-02 04:55:17.628966 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-02-02 04:55:17.628976 | orchestrator | Monday 02 February 2026 04:55:08 +0000 (0:00:01.119) 0:00:16.579 *******
2026-02-02 04:55:17.628987 | orchestrator | skipping: [testbed-manager]
2026-02-02 04:55:17.628998 | orchestrator |
2026-02-02 04:55:17.629009 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-02-02 04:55:17.629020 | orchestrator | Monday 02 February 2026 04:55:09 +0000 (0:00:01.130) 0:00:17.710 *******
2026-02-02 04:55:17.629031 | orchestrator | ok: [testbed-manager]
2026-02-02 04:55:17.629043 | orchestrator |
2026-02-02 04:55:17.629054 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-02-02 04:55:17.629065 | orchestrator | Monday 02 February 2026 04:55:11 +0000 (0:00:01.929) 0:00:19.639 *******
2026-02-02 04:55:17.629077 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-02-02 04:55:17.629108 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-02-02 04:55:17.629123 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-02-02 04:55:17.629136 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-02-02 04:55:17.629147 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-02-02 04:55:17.629159 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-02-02 04:55:17.629171 | orchestrator |
2026-02-02 04:55:17.629184 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-02-02 04:55:17.629197 | orchestrator | Monday 02 February 2026 04:55:14 +0000 (0:00:03.499) 0:00:23.139 *******
2026-02-02 04:55:17.629210 | orchestrator | ok: [testbed-manager]
2026-02-02 04:55:17.629221 | orchestrator |
2026-02-02 04:55:17.629232 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 04:55:17.629285 | orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-02 04:55:17.629300 | orchestrator |
2026-02-02 04:55:17.629314 | orchestrator |
2026-02-02 04:55:17.629327 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 04:55:17.629339 | orchestrator | Monday 02 February 2026 04:55:17 +0000 (0:00:02.505) 0:00:25.644 *******
2026-02-02 04:55:17.629351 | orchestrator | ===============================================================================
2026-02-02 04:55:17.629365 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.50s
2026-02-02 04:55:17.629377 | orchestrator | osism.services.frr : Install frr package -------------------------------- 3.03s
2026-02-02 04:55:17.629390 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.51s
2026-02-02 04:55:17.629403 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 2.43s
2026-02-02 04:55:17.629415 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.09s
2026-02-02 04:55:17.629427 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.93s
2026-02-02 04:55:17.629439 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.91s
2026-02-02 04:55:17.629462 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.90s
2026-02-02 04:55:17.629494 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 1.86s
2026-02-02 04:55:17.629507 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 1.13s
2026-02-02 04:55:17.629535 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 1.12s
2026-02-02 04:55:17.992997 | orchestrator | + osism apply kubernetes
2026-02-02 04:55:20.204764 | orchestrator | 2026-02-02 04:55:20 | INFO  | Task ae6b6ab3-5684-4cb5-9b0f-cb7332daa52b (kubernetes) was prepared for execution.
2026-02-02 04:55:20.204835 | orchestrator | 2026-02-02 04:55:20 | INFO  | It takes a moment until task ae6b6ab3-5684-4cb5-9b0f-cb7332daa52b (kubernetes) has been started and output is visible here.
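Automated jobs like this one typically decide pass/fail from the `PLAY RECAP` counters rather than from individual task lines. A small filter over recap lines (a sketch; the `failed=N` / `unreachable=N` field layout is taken from the recap printed above) flags any host that reported a failure:

```shell
#!/usr/bin/env bash
# recap_ok: succeed only when a PLAY RECAP line reports no failed and no
# unreachable hosts. A sketch over the counter layout shown in the log
# above; any real job should also handle multi-host recaps line by line.
recap_ok() {
    ! grep -Eq 'failed=[1-9][0-9]*|unreachable=[1-9][0-9]*' <<< "$1"
}

line='testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0'
if recap_ok "$line"; then
    echo "recap clean"
fi
```

Matching `[1-9][0-9]*` instead of any digit is what lets `failed=0` pass while `failed=10` is still caught.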
2026-02-02 04:56:07.223178 | orchestrator |
2026-02-02 04:56:07.223269 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-02-02 04:56:07.223280 | orchestrator |
2026-02-02 04:56:07.223287 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-02-02 04:56:07.223296 | orchestrator | Monday 02 February 2026 04:55:26 +0000 (0:00:01.815) 0:00:01.815 *******
2026-02-02 04:56:07.223303 | orchestrator | ok: [testbed-node-3]
2026-02-02 04:56:07.223311 | orchestrator | ok: [testbed-node-4]
2026-02-02 04:56:07.223318 | orchestrator | ok: [testbed-node-5]
2026-02-02 04:56:07.223325 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:56:07.223332 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:56:07.223339 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:56:07.223346 | orchestrator |
2026-02-02 04:56:07.223353 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-02-02 04:56:07.223360 | orchestrator | Monday 02 February 2026 04:55:31 +0000 (0:00:05.039) 0:00:06.855 *******
2026-02-02 04:56:07.223367 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:56:07.223375 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:56:07.223382 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:56:07.223388 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:56:07.223395 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:56:07.223402 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:56:07.223409 | orchestrator |
2026-02-02 04:56:07.223416 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-02-02 04:56:07.223423 | orchestrator | Monday 02 February 2026 04:55:33 +0000 (0:00:02.207) 0:00:09.062 *******
2026-02-02 04:56:07.223430 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:56:07.223437 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:56:07.223444 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:56:07.223451 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:56:07.223458 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:56:07.223465 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:56:07.223472 | orchestrator |
2026-02-02 04:56:07.223479 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-02-02 04:56:07.223525 | orchestrator | Monday 02 February 2026 04:55:35 +0000 (0:00:01.964) 0:00:11.026 *******
2026-02-02 04:56:07.223532 | orchestrator | ok: [testbed-node-3]
2026-02-02 04:56:07.223539 | orchestrator | ok: [testbed-node-4]
2026-02-02 04:56:07.223546 | orchestrator | ok: [testbed-node-5]
2026-02-02 04:56:07.223553 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:56:07.223559 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:56:07.223566 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:56:07.223573 | orchestrator |
2026-02-02 04:56:07.223580 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-02-02 04:56:07.223588 | orchestrator | Monday 02 February 2026 04:55:38 +0000 (0:00:02.694) 0:00:13.721 *******
2026-02-02 04:56:07.223595 | orchestrator | ok: [testbed-node-3]
2026-02-02 04:56:07.223601 | orchestrator | ok: [testbed-node-4]
2026-02-02 04:56:07.223608 | orchestrator | ok: [testbed-node-5]
2026-02-02 04:56:07.223615 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:56:07.223641 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:56:07.223647 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:56:07.223654 | orchestrator |
2026-02-02 04:56:07.223661 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-02-02 04:56:07.223668 | orchestrator | Monday 02 February 2026 04:55:41 +0000 (0:00:02.521) 0:00:16.243 *******
2026-02-02 04:56:07.223675 | orchestrator | ok: [testbed-node-3]
2026-02-02 04:56:07.223682 | orchestrator | ok: [testbed-node-4]
2026-02-02 04:56:07.223688 | orchestrator | ok: [testbed-node-5]
2026-02-02 04:56:07.223695 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:56:07.223702 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:56:07.223709 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:56:07.223716 | orchestrator |
2026-02-02 04:56:07.223723 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-02-02 04:56:07.223731 | orchestrator | Monday 02 February 2026 04:55:44 +0000 (0:00:03.169) 0:00:19.412 *******
2026-02-02 04:56:07.223741 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:56:07.223749 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:56:07.223755 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:56:07.223761 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:56:07.223767 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:56:07.223774 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:56:07.223784 | orchestrator |
2026-02-02 04:56:07.223794 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-02-02 04:56:07.223804 | orchestrator | Monday 02 February 2026 04:55:46 +0000 (0:00:02.074) 0:00:21.486 *******
2026-02-02 04:56:07.223814 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:56:07.223824 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:56:07.223834 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:56:07.223845 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:56:07.223861 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:56:07.223871 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:56:07.223881 | orchestrator |
2026-02-02 04:56:07.223891 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-02-02 04:56:07.223901 | orchestrator | Monday 02 February 2026 04:55:48 +0000 (0:00:02.142) 0:00:23.629 *******
2026-02-02 04:56:07.223911 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-02 04:56:07.223921 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-02 04:56:07.223931 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:56:07.223941 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-02 04:56:07.223951 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-02 04:56:07.223961 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:56:07.223972 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-02 04:56:07.223981 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-02 04:56:07.223991 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:56:07.224001 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-02 04:56:07.224011 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-02 04:56:07.224022 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:56:07.224045 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-02 04:56:07.224055 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-02 04:56:07.224064 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:56:07.224071 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-02 04:56:07.224078 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-02 04:56:07.224085 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:56:07.224092 | orchestrator |
2026-02-02 04:56:07.224105 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-02-02 04:56:07.224112 | orchestrator | Monday 02 February 2026 04:55:50 +0000 (0:00:02.260) 0:00:25.889 *******
2026-02-02 04:56:07.224118 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:56:07.224125 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:56:07.224132 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:56:07.224139 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:56:07.224146 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:56:07.224153 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:56:07.224160 | orchestrator |
2026-02-02 04:56:07.224166 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-02-02 04:56:07.224174 | orchestrator | Monday 02 February 2026 04:55:53 +0000 (0:00:02.488) 0:00:28.378 *******
2026-02-02 04:56:07.224181 | orchestrator | ok: [testbed-node-3]
2026-02-02 04:56:07.224188 | orchestrator | ok: [testbed-node-4]
2026-02-02 04:56:07.224195 | orchestrator | ok: [testbed-node-5]
2026-02-02 04:56:07.224202 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:56:07.224208 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:56:07.224215 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:56:07.224222 | orchestrator |
2026-02-02 04:56:07.224229 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-02-02 04:56:07.224236 | orchestrator | Monday 02 February 2026 04:55:55 +0000 (0:00:02.101) 0:00:30.480 *******
2026-02-02 04:56:07.224243 | orchestrator | ok: [testbed-node-5]
2026-02-02 04:56:07.224249 | orchestrator | ok: [testbed-node-3]
2026-02-02 04:56:07.224256 | orchestrator | ok: [testbed-node-4]
2026-02-02 04:56:07.224263 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:56:07.224270 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:56:07.224277 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:56:07.224283 | orchestrator |
2026-02-02 04:56:07.224290 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-02-02 04:56:07.224297 | orchestrator | Monday 02 February 2026 04:55:58 +0000 (0:00:02.756) 0:00:33.236 *******
2026-02-02 04:56:07.224304 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:56:07.224311 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:56:07.224317 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:56:07.224324 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:56:07.224331 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:56:07.224337 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:56:07.224344 | orchestrator |
2026-02-02 04:56:07.224351 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-02-02 04:56:07.224358 | orchestrator | Monday 02 February 2026 04:56:00 +0000 (0:00:02.038) 0:00:35.275 *******
2026-02-02 04:56:07.224365 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:56:07.224372 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:56:07.224378 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:56:07.224385 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:56:07.224392 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:56:07.224399 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:56:07.224406 | orchestrator |
2026-02-02 04:56:07.224413 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-02-02 04:56:07.224421 | orchestrator | Monday 02 February 2026 04:56:02 +0000 (0:00:02.435) 0:00:37.710 *******
2026-02-02 04:56:07.224428 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:56:07.224438 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:56:07.224445 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:56:07.224452 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:56:07.224459 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:56:07.224465 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:56:07.224472 | orchestrator |
2026-02-02 04:56:07.224479 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-02-02 04:56:07.224509 | orchestrator | Monday 02 February 2026 04:56:04 +0000 (0:00:01.916) 0:00:39.627 *******
2026-02-02 04:56:07.224521 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-02-02 04:56:07.224528 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-02-02 04:56:07.224536 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:56:07.224542 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-02-02 04:56:07.224549 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-02-02 04:56:07.224556 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:56:07.224563 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-02-02 04:56:07.224569 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-02-02 04:56:07.224575 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:56:07.224581 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-02-02 04:56:07.224587 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-02-02 04:56:07.224594 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:56:07.224601 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-02-02 04:56:07.224608 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-02-02 04:56:07.224614 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:56:07.224621 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-02-02 04:56:07.224628 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-02-02 04:56:07.224635 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:56:07.224642 | orchestrator |
2026-02-02 04:56:07.224649 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-02-02 04:56:07.224655 | orchestrator | Monday 02 February 2026 04:56:06 +0000 (0:00:02.120) 0:00:41.747 *******
2026-02-02 04:56:07.224662 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:56:07.224669 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:56:07.224681 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:57:48.761091 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:57:48.761208 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:57:48.761223 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:57:48.761235 | orchestrator |
2026-02-02 04:57:48.761248 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-02-02 04:57:48.761260 | orchestrator | Monday 02 February 2026 04:56:08 +0000 (0:00:01.947) 0:00:43.695 *******
2026-02-02 04:57:48.761272 | orchestrator | skipping: [testbed-node-3]
2026-02-02 04:57:48.761283 | orchestrator | skipping: [testbed-node-4]
2026-02-02 04:57:48.761293 | orchestrator | skipping: [testbed-node-5]
2026-02-02 04:57:48.761304 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:57:48.761315 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:57:48.761326 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:57:48.761337 | orchestrator |
2026-02-02 04:57:48.761348 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-02-02 04:57:48.761359 | orchestrator |
2026-02-02 04:57:48.761370 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-02-02 04:57:48.761382 | orchestrator | Monday 02 February 2026 04:56:11 +0000 (0:00:02.924) 0:00:46.619 *******
2026-02-02 04:57:48.761393 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:57:48.761405 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:57:48.761523 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:57:48.761541 | orchestrator |
2026-02-02 04:57:48.761558 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-02-02 04:57:48.761569 | orchestrator | Monday 02 February 2026 04:56:13 +0000 (0:00:01.853) 0:00:48.472 *******
2026-02-02 04:57:48.761580 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:57:48.761591 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:57:48.761602 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:57:48.761613 | orchestrator |
2026-02-02 04:57:48.761624 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-02-02 04:57:48.761637 | orchestrator | Monday 02 February 2026 04:56:16 +0000 (0:00:03.103) 0:00:51.577 *******
2026-02-02 04:57:48.761672 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:57:48.761686 | orchestrator | changed: [testbed-node-1]
2026-02-02 04:57:48.761698 | orchestrator | changed: [testbed-node-2]
2026-02-02 04:57:48.761711 | orchestrator |
2026-02-02 04:57:48.761725 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-02-02 04:57:48.761737 | orchestrator | Monday 02 February 2026 04:56:18 +0000 (0:00:02.269) 0:00:53.846 *******
2026-02-02 04:57:48.761749 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:57:48.761763 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:57:48.761775 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:57:48.761788 | orchestrator |
2026-02-02 04:57:48.761800 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-02-02 04:57:48.761812 | orchestrator | Monday 02 February 2026 04:56:20 +0000 (0:00:02.218) 0:00:56.065 *******
2026-02-02 04:57:48.761825 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:57:48.761838 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:57:48.761850 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:57:48.761863 | orchestrator |
2026-02-02 04:57:48.761875 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-02-02 04:57:48.761887 | orchestrator | Monday 02 February 2026 04:56:22 +0000 (0:00:01.458) 0:00:57.524 *******
2026-02-02 04:57:48.761900 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:57:48.761913 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:57:48.761925 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:57:48.761937 | orchestrator |
2026-02-02 04:57:48.761950 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-02-02 04:57:48.761962 | orchestrator | Monday 02 February 2026 04:56:24 +0000 (0:00:01.776) 0:00:59.301 *******
2026-02-02 04:57:48.761974 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:57:48.761986 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:57:48.761998 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:57:48.762008 | orchestrator |
2026-02-02 04:57:48.762073 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-02-02 04:57:48.762085 | orchestrator | Monday 02 February 2026 04:56:26 +0000 (0:00:02.288) 0:01:01.590 *******
2026-02-02 04:57:48.762096 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 04:57:48.762107 | orchestrator |
2026-02-02 04:57:48.762118 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-02-02 04:57:48.762129 | orchestrator | Monday 02 February 2026 04:56:28 +0000 (0:00:02.124) 0:01:03.714 *******
2026-02-02 04:57:48.762139 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:57:48.762159 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:57:48.762185 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:57:48.762205 | orchestrator |
2026-02-02 04:57:48.762224 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-02-02 04:57:48.762242 | orchestrator | Monday 02 February 2026 04:56:31 +0000 (0:00:02.603) 0:01:06.318 *******
2026-02-02 04:57:48.762260 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:57:48.762279 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:57:48.762291 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:57:48.762301 | orchestrator |
2026-02-02 04:57:48.762312 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-02-02 04:57:48.762323 | orchestrator | Monday 02 February 2026 04:56:33 +0000 (0:00:01.861) 0:01:08.179 *******
2026-02-02 04:57:48.762334 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:57:48.762363 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:57:48.762402 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:57:48.762422 | orchestrator |
2026-02-02 04:57:48.762468 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-02-02 04:57:48.762486 | orchestrator | Monday 02 February 2026 04:56:34 +0000 (0:00:01.766) 0:01:09.945 *******
2026-02-02 04:57:48.762503 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:57:48.762519 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:57:48.762535 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:57:48.762568 | orchestrator |
2026-02-02 04:57:48.762586 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-02-02 04:57:48.762603 | orchestrator | Monday 02 February 2026 04:56:37 +0000 (0:00:02.558) 0:01:12.504 *******
2026-02-02 04:57:48.762622 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:57:48.762639 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:57:48.762684 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:57:48.762704 | orchestrator |
2026-02-02 04:57:48.762716 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-02-02 04:57:48.762727 | orchestrator | Monday 02 February 2026 04:56:38 +0000 (0:00:01.407) 0:01:13.911 *******
2026-02-02 04:57:48.762738 | orchestrator | skipping: [testbed-node-0]
2026-02-02 04:57:48.762748 | orchestrator | skipping: [testbed-node-1]
2026-02-02 04:57:48.762759 | orchestrator | skipping: [testbed-node-2]
2026-02-02 04:57:48.762769 | orchestrator |
2026-02-02 04:57:48.762780 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-02-02 04:57:48.762791 | orchestrator | Monday 02 February 2026 04:56:40 +0000 (0:00:01.814) 0:01:15.726 *******
2026-02-02 04:57:48.762802 | orchestrator | changed: [testbed-node-0]
2026-02-02 04:57:48.762812 | orchestrator | changed: [testbed-node-1]
2026-02-02 04:57:48.762823 | orchestrator | changed: [testbed-node-2]
2026-02-02 04:57:48.762834 | orchestrator |
2026-02-02 04:57:48.762845 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-02-02 04:57:48.762855 | orchestrator | Monday 02 February 2026 04:56:42 +0000 (0:00:02.215) 0:01:17.942 *******
2026-02-02 04:57:48.762866 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:57:48.762877 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:57:48.762887 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:57:48.762898 | orchestrator |
2026-02-02 04:57:48.762908 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-02-02 04:57:48.762919 | orchestrator | Monday 02 February 2026 04:56:44 +0000 (0:00:02.123) 0:01:20.065 *******
2026-02-02 04:57:48.762930 | orchestrator | ok: [testbed-node-0]
2026-02-02 04:57:48.762941 | orchestrator | ok: [testbed-node-1]
2026-02-02 04:57:48.762951 | orchestrator | ok: [testbed-node-2]
2026-02-02 04:57:48.762962 | orchestrator |
2026-02-02 04:57:48.762972
| orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-02-02 04:57:48.762983 | orchestrator | Monday 02 February 2026 04:56:46 +0000 (0:00:01.534) 0:01:21.600 ******* 2026-02-02 04:57:48.762994 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-02 04:57:48.763007 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-02 04:57:48.763018 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-02 04:57:48.763029 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-02 04:57:48.763040 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-02 04:57:48.763050 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2026-02-02 04:57:48.763061 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:57:48.763071 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:57:48.763082 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:57:48.763093 | orchestrator | 2026-02-02 04:57:48.763103 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-02-02 04:57:48.763114 | orchestrator | Monday 02 February 2026 04:57:09 +0000 (0:00:23.348) 0:01:44.948 ******* 2026-02-02 04:57:48.763125 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:57:48.763150 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:57:48.763182 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:57:48.763193 | orchestrator | 2026-02-02 04:57:48.763204 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-02-02 04:57:48.763215 | orchestrator | Monday 02 February 2026 04:57:11 +0000 (0:00:01.420) 0:01:46.369 ******* 2026-02-02 04:57:48.763225 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:57:48.763236 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:57:48.763247 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:57:48.763258 | orchestrator | 2026-02-02 04:57:48.763268 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-02-02 04:57:48.763279 | orchestrator | Monday 02 February 2026 04:57:13 +0000 (0:00:02.211) 0:01:48.581 ******* 2026-02-02 04:57:48.763290 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:57:48.763301 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:57:48.763311 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:57:48.763322 | orchestrator | 2026-02-02 04:57:48.763333 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-02-02 04:57:48.763344 | orchestrator | Monday 02 February 2026 04:57:15 +0000 (0:00:02.327) 0:01:50.908 ******* 2026-02-02 04:57:48.763354 | orchestrator 
| changed: [testbed-node-0] 2026-02-02 04:57:48.763365 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:57:48.763376 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:57:48.763387 | orchestrator | 2026-02-02 04:57:48.763398 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-02-02 04:57:48.763408 | orchestrator | Monday 02 February 2026 04:57:43 +0000 (0:00:27.763) 0:02:18.672 ******* 2026-02-02 04:57:48.763419 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:57:48.763430 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:57:48.763458 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:57:48.763469 | orchestrator | 2026-02-02 04:57:48.763488 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-02-02 04:57:48.763499 | orchestrator | Monday 02 February 2026 04:57:45 +0000 (0:00:01.689) 0:02:20.362 ******* 2026-02-02 04:57:48.763510 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:57:48.763520 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:57:48.763531 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:57:48.763542 | orchestrator | 2026-02-02 04:57:48.763552 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-02-02 04:57:48.763563 | orchestrator | Monday 02 February 2026 04:57:46 +0000 (0:00:01.720) 0:02:22.083 ******* 2026-02-02 04:57:48.763574 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:57:48.763585 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:57:48.763599 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:57:48.763618 | orchestrator | 2026-02-02 04:57:48.763645 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-02-02 04:58:37.854739 | orchestrator | Monday 02 February 2026 04:57:48 +0000 (0:00:01.811) 0:02:23.894 ******* 2026-02-02 04:58:37.854856 | orchestrator | ok: [testbed-node-0] 2026-02-02 
04:58:37.854872 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:58:37.854884 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:58:37.854896 | orchestrator | 2026-02-02 04:58:37.854908 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-02-02 04:58:37.854920 | orchestrator | Monday 02 February 2026 04:57:50 +0000 (0:00:01.792) 0:02:25.687 ******* 2026-02-02 04:58:37.854931 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:58:37.854941 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:58:37.854952 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:58:37.854963 | orchestrator | 2026-02-02 04:58:37.854974 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-02-02 04:58:37.854985 | orchestrator | Monday 02 February 2026 04:57:51 +0000 (0:00:01.429) 0:02:27.116 ******* 2026-02-02 04:58:37.854997 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:58:37.855009 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:58:37.855020 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:58:37.855031 | orchestrator | 2026-02-02 04:58:37.855042 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-02-02 04:58:37.855077 | orchestrator | Monday 02 February 2026 04:57:53 +0000 (0:00:01.683) 0:02:28.800 ******* 2026-02-02 04:58:37.855104 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:58:37.855115 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:58:37.855126 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:58:37.855161 | orchestrator | 2026-02-02 04:58:37.855173 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-02-02 04:58:37.855184 | orchestrator | Monday 02 February 2026 04:57:55 +0000 (0:00:01.951) 0:02:30.751 ******* 2026-02-02 04:58:37.855195 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:58:37.855206 | orchestrator | changed: 
[testbed-node-1] 2026-02-02 04:58:37.855216 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:58:37.855227 | orchestrator | 2026-02-02 04:58:37.855238 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-02-02 04:58:37.855249 | orchestrator | Monday 02 February 2026 04:57:57 +0000 (0:00:01.781) 0:02:32.533 ******* 2026-02-02 04:58:37.855260 | orchestrator | changed: [testbed-node-0] 2026-02-02 04:58:37.855271 | orchestrator | changed: [testbed-node-1] 2026-02-02 04:58:37.855282 | orchestrator | changed: [testbed-node-2] 2026-02-02 04:58:37.855292 | orchestrator | 2026-02-02 04:58:37.855303 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-02-02 04:58:37.855314 | orchestrator | Monday 02 February 2026 04:57:59 +0000 (0:00:01.849) 0:02:34.383 ******* 2026-02-02 04:58:37.855325 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:58:37.855335 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:58:37.855346 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:58:37.855357 | orchestrator | 2026-02-02 04:58:37.855368 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-02-02 04:58:37.855378 | orchestrator | Monday 02 February 2026 04:58:00 +0000 (0:00:01.323) 0:02:35.706 ******* 2026-02-02 04:58:37.855389 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:58:37.855400 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:58:37.855411 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:58:37.855450 | orchestrator | 2026-02-02 04:58:37.855462 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-02-02 04:58:37.855472 | orchestrator | Monday 02 February 2026 04:58:02 +0000 (0:00:01.794) 0:02:37.501 ******* 2026-02-02 04:58:37.855483 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:58:37.855494 | orchestrator | ok: [testbed-node-1] 
2026-02-02 04:58:37.855504 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:58:37.855515 | orchestrator | 2026-02-02 04:58:37.855526 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-02-02 04:58:37.855537 | orchestrator | Monday 02 February 2026 04:58:04 +0000 (0:00:01.780) 0:02:39.281 ******* 2026-02-02 04:58:37.855548 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:58:37.855558 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:58:37.855569 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:58:37.855579 | orchestrator | 2026-02-02 04:58:37.855591 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-02-02 04:58:37.855604 | orchestrator | Monday 02 February 2026 04:58:05 +0000 (0:00:01.628) 0:02:40.910 ******* 2026-02-02 04:58:37.855615 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-02 04:58:37.855626 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-02 04:58:37.855636 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-02 04:58:37.855647 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-02 04:58:37.855658 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-02 04:58:37.855668 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-02 04:58:37.855688 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-02 04:58:37.855699 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-02 04:58:37.855710 | 
orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-02 04:58:37.855721 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-02-02 04:58:37.855731 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-02 04:58:37.855742 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-02 04:58:37.855770 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-02-02 04:58:37.855782 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-02 04:58:37.855792 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-02 04:58:37.855803 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-02 04:58:37.855814 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-02 04:58:37.855824 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-02 04:58:37.855835 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-02 04:58:37.855845 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-02 04:58:37.855856 | orchestrator | 2026-02-02 04:58:37.855867 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-02-02 04:58:37.855877 | orchestrator | 2026-02-02 04:58:37.855888 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-02-02 04:58:37.855899 | orchestrator | Monday 02 February 2026 04:58:10 +0000 (0:00:04.509) 0:02:45.420 ******* 
2026-02-02 04:58:37.855910 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:58:37.855921 | orchestrator | ok: [testbed-node-4] 2026-02-02 04:58:37.855931 | orchestrator | ok: [testbed-node-5] 2026-02-02 04:58:37.855942 | orchestrator | 2026-02-02 04:58:37.855953 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-02-02 04:58:37.855964 | orchestrator | Monday 02 February 2026 04:58:11 +0000 (0:00:01.418) 0:02:46.838 ******* 2026-02-02 04:58:37.855974 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:58:37.855985 | orchestrator | ok: [testbed-node-4] 2026-02-02 04:58:37.855995 | orchestrator | ok: [testbed-node-5] 2026-02-02 04:58:37.856006 | orchestrator | 2026-02-02 04:58:37.856017 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-02-02 04:58:37.856028 | orchestrator | Monday 02 February 2026 04:58:13 +0000 (0:00:01.647) 0:02:48.486 ******* 2026-02-02 04:58:37.856038 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:58:37.856049 | orchestrator | ok: [testbed-node-4] 2026-02-02 04:58:37.856059 | orchestrator | ok: [testbed-node-5] 2026-02-02 04:58:37.856070 | orchestrator | 2026-02-02 04:58:37.856081 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-02-02 04:58:37.856091 | orchestrator | Monday 02 February 2026 04:58:14 +0000 (0:00:01.656) 0:02:50.142 ******* 2026-02-02 04:58:37.856102 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 04:58:37.856113 | orchestrator | 2026-02-02 04:58:37.856124 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-02-02 04:58:37.856135 | orchestrator | Monday 02 February 2026 04:58:16 +0000 (0:00:01.801) 0:02:51.944 ******* 2026-02-02 04:58:37.856145 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:58:37.856156 | orchestrator | 
skipping: [testbed-node-4] 2026-02-02 04:58:37.856167 | orchestrator | skipping: [testbed-node-5] 2026-02-02 04:58:37.856184 | orchestrator | 2026-02-02 04:58:37.856196 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-02-02 04:58:37.856206 | orchestrator | Monday 02 February 2026 04:58:18 +0000 (0:00:01.346) 0:02:53.290 ******* 2026-02-02 04:58:37.856217 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:58:37.856228 | orchestrator | skipping: [testbed-node-4] 2026-02-02 04:58:37.856238 | orchestrator | skipping: [testbed-node-5] 2026-02-02 04:58:37.856249 | orchestrator | 2026-02-02 04:58:37.856260 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-02-02 04:58:37.856271 | orchestrator | Monday 02 February 2026 04:58:19 +0000 (0:00:01.719) 0:02:55.010 ******* 2026-02-02 04:58:37.856281 | orchestrator | skipping: [testbed-node-3] 2026-02-02 04:58:37.856292 | orchestrator | skipping: [testbed-node-4] 2026-02-02 04:58:37.856302 | orchestrator | skipping: [testbed-node-5] 2026-02-02 04:58:37.856313 | orchestrator | 2026-02-02 04:58:37.856324 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-02-02 04:58:37.856335 | orchestrator | Monday 02 February 2026 04:58:21 +0000 (0:00:01.482) 0:02:56.492 ******* 2026-02-02 04:58:37.856345 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:58:37.856356 | orchestrator | ok: [testbed-node-4] 2026-02-02 04:58:37.856367 | orchestrator | ok: [testbed-node-5] 2026-02-02 04:58:37.856378 | orchestrator | 2026-02-02 04:58:37.856388 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-02-02 04:58:37.856407 | orchestrator | Monday 02 February 2026 04:58:23 +0000 (0:00:01.841) 0:02:58.334 ******* 2026-02-02 04:58:37.856472 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:58:37.856486 | orchestrator | ok: [testbed-node-4] 
2026-02-02 04:58:37.856497 | orchestrator | ok: [testbed-node-5] 2026-02-02 04:58:37.856508 | orchestrator | 2026-02-02 04:58:37.856518 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-02-02 04:58:37.856529 | orchestrator | Monday 02 February 2026 04:58:25 +0000 (0:00:02.289) 0:03:00.624 ******* 2026-02-02 04:58:37.856540 | orchestrator | ok: [testbed-node-3] 2026-02-02 04:58:37.856551 | orchestrator | ok: [testbed-node-4] 2026-02-02 04:58:37.856561 | orchestrator | ok: [testbed-node-5] 2026-02-02 04:58:37.856572 | orchestrator | 2026-02-02 04:58:37.856583 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-02-02 04:58:37.856593 | orchestrator | Monday 02 February 2026 04:58:27 +0000 (0:00:02.262) 0:03:02.886 ******* 2026-02-02 04:58:37.856604 | orchestrator | changed: [testbed-node-3] 2026-02-02 04:58:37.856615 | orchestrator | changed: [testbed-node-4] 2026-02-02 04:58:37.856625 | orchestrator | changed: [testbed-node-5] 2026-02-02 04:58:37.856636 | orchestrator | 2026-02-02 04:58:37.856647 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-02-02 04:58:37.856658 | orchestrator | 2026-02-02 04:58:37.856668 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-02-02 04:58:37.856679 | orchestrator | Monday 02 February 2026 04:58:35 +0000 (0:00:07.908) 0:03:10.795 ******* 2026-02-02 04:58:37.856690 | orchestrator | ok: [testbed-manager] 2026-02-02 04:58:37.856701 | orchestrator | 2026-02-02 04:58:37.856711 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-02-02 04:58:37.856729 | orchestrator | Monday 02 February 2026 04:58:37 +0000 (0:00:02.193) 0:03:12.989 ******* 2026-02-02 04:59:47.372185 | orchestrator | ok: [testbed-manager] 2026-02-02 04:59:47.372287 | orchestrator | 2026-02-02 04:59:47.372300 | 
orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-02 04:59:47.372310 | orchestrator | Monday 02 February 2026 04:58:39 +0000 (0:00:01.485) 0:03:14.474 ******* 2026-02-02 04:59:47.372319 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-02 04:59:47.372327 | orchestrator | 2026-02-02 04:59:47.372335 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-02 04:59:47.372343 | orchestrator | Monday 02 February 2026 04:58:40 +0000 (0:00:01.563) 0:03:16.038 ******* 2026-02-02 04:59:47.372352 | orchestrator | changed: [testbed-manager] 2026-02-02 04:59:47.372380 | orchestrator | 2026-02-02 04:59:47.372388 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-02-02 04:59:47.372496 | orchestrator | Monday 02 February 2026 04:58:42 +0000 (0:00:02.026) 0:03:18.065 ******* 2026-02-02 04:59:47.372507 | orchestrator | changed: [testbed-manager] 2026-02-02 04:59:47.372515 | orchestrator | 2026-02-02 04:59:47.372523 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-02-02 04:59:47.372543 | orchestrator | Monday 02 February 2026 04:58:44 +0000 (0:00:01.610) 0:03:19.676 ******* 2026-02-02 04:59:47.372552 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-02 04:59:47.372560 | orchestrator | 2026-02-02 04:59:47.372568 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-02-02 04:59:47.372575 | orchestrator | Monday 02 February 2026 04:58:47 +0000 (0:00:03.070) 0:03:22.747 ******* 2026-02-02 04:59:47.372583 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-02 04:59:47.372591 | orchestrator | 2026-02-02 04:59:47.372599 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-02-02 04:59:47.372607 | orchestrator | Monday 02 February 
2026 04:58:49 +0000 (0:00:01.967) 0:03:24.715 ******* 2026-02-02 04:59:47.372614 | orchestrator | ok: [testbed-manager] 2026-02-02 04:59:47.372623 | orchestrator | 2026-02-02 04:59:47.372631 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-02-02 04:59:47.372638 | orchestrator | Monday 02 February 2026 04:58:51 +0000 (0:00:01.462) 0:03:26.177 ******* 2026-02-02 04:59:47.372646 | orchestrator | ok: [testbed-manager] 2026-02-02 04:59:47.372654 | orchestrator | 2026-02-02 04:59:47.372662 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-02-02 04:59:47.372670 | orchestrator | 2026-02-02 04:59:47.372678 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-02-02 04:59:47.372685 | orchestrator | Monday 02 February 2026 04:58:52 +0000 (0:00:01.538) 0:03:27.716 ******* 2026-02-02 04:59:47.372693 | orchestrator | ok: [testbed-manager] 2026-02-02 04:59:47.372701 | orchestrator | 2026-02-02 04:59:47.372709 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-02-02 04:59:47.372717 | orchestrator | Monday 02 February 2026 04:58:53 +0000 (0:00:01.138) 0:03:28.854 ******* 2026-02-02 04:59:47.372725 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-02-02 04:59:47.372734 | orchestrator | 2026-02-02 04:59:47.372743 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-02-02 04:59:47.372752 | orchestrator | Monday 02 February 2026 04:58:55 +0000 (0:00:01.432) 0:03:30.287 ******* 2026-02-02 04:59:47.372761 | orchestrator | ok: [testbed-manager] 2026-02-02 04:59:47.372770 | orchestrator | 2026-02-02 04:59:47.372779 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-02-02 04:59:47.372787 | orchestrator | Monday 02 February 2026 
04:58:57 +0000 (0:00:01.889) 0:03:32.176 ******* 2026-02-02 04:59:47.372797 | orchestrator | ok: [testbed-manager] 2026-02-02 04:59:47.372806 | orchestrator | 2026-02-02 04:59:47.372815 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-02-02 04:59:47.372824 | orchestrator | Monday 02 February 2026 04:58:59 +0000 (0:00:02.713) 0:03:34.889 ******* 2026-02-02 04:59:47.372833 | orchestrator | ok: [testbed-manager] 2026-02-02 04:59:47.372842 | orchestrator | 2026-02-02 04:59:47.372851 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-02-02 04:59:47.372860 | orchestrator | Monday 02 February 2026 04:59:01 +0000 (0:00:01.450) 0:03:36.340 ******* 2026-02-02 04:59:47.372869 | orchestrator | ok: [testbed-manager] 2026-02-02 04:59:47.372878 | orchestrator | 2026-02-02 04:59:47.372888 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-02-02 04:59:47.372897 | orchestrator | Monday 02 February 2026 04:59:02 +0000 (0:00:01.505) 0:03:37.845 ******* 2026-02-02 04:59:47.372906 | orchestrator | ok: [testbed-manager] 2026-02-02 04:59:47.372915 | orchestrator | 2026-02-02 04:59:47.372924 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-02-02 04:59:47.372941 | orchestrator | Monday 02 February 2026 04:59:04 +0000 (0:00:01.576) 0:03:39.422 ******* 2026-02-02 04:59:47.372950 | orchestrator | ok: [testbed-manager] 2026-02-02 04:59:47.372959 | orchestrator | 2026-02-02 04:59:47.372968 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-02-02 04:59:47.372977 | orchestrator | Monday 02 February 2026 04:59:06 +0000 (0:00:02.539) 0:03:41.962 ******* 2026-02-02 04:59:47.372986 | orchestrator | ok: [testbed-manager] 2026-02-02 04:59:47.372995 | orchestrator | 2026-02-02 04:59:47.373004 | orchestrator | PLAY [Run post actions on master 
nodes] **************************************** 2026-02-02 04:59:47.373013 | orchestrator | 2026-02-02 04:59:47.373022 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-02-02 04:59:47.373031 | orchestrator | Monday 02 February 2026 04:59:08 +0000 (0:00:01.694) 0:03:43.656 ******* 2026-02-02 04:59:47.373039 | orchestrator | ok: [testbed-node-0] 2026-02-02 04:59:47.373047 | orchestrator | ok: [testbed-node-1] 2026-02-02 04:59:47.373055 | orchestrator | ok: [testbed-node-2] 2026-02-02 04:59:47.373062 | orchestrator | 2026-02-02 04:59:47.373070 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-02-02 04:59:47.373078 | orchestrator | Monday 02 February 2026 04:59:09 +0000 (0:00:01.362) 0:03:45.018 ******* 2026-02-02 04:59:47.373086 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:59:47.373094 | orchestrator | skipping: [testbed-node-1] 2026-02-02 04:59:47.373102 | orchestrator | skipping: [testbed-node-2] 2026-02-02 04:59:47.373109 | orchestrator | 2026-02-02 04:59:47.373133 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-02-02 04:59:47.373141 | orchestrator | Monday 02 February 2026 04:59:11 +0000 (0:00:01.595) 0:03:46.614 ******* 2026-02-02 04:59:47.373149 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 04:59:47.373157 | orchestrator | 2026-02-02 04:59:47.373165 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-02-02 04:59:47.373173 | orchestrator | Monday 02 February 2026 04:59:13 +0000 (0:00:01.722) 0:03:48.337 ******* 2026-02-02 04:59:47.373181 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-02 04:59:47.373188 | orchestrator | 2026-02-02 04:59:47.373196 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] 
********************* 2026-02-02 04:59:47.373204 | orchestrator | Monday 02 February 2026 04:59:15 +0000 (0:00:01.901) 0:03:50.238 ******* 2026-02-02 04:59:47.373212 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-02 04:59:47.373220 | orchestrator | 2026-02-02 04:59:47.373228 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-02-02 04:59:47.373236 | orchestrator | Monday 02 February 2026 04:59:17 +0000 (0:00:01.918) 0:03:52.157 ******* 2026-02-02 04:59:47.373244 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:59:47.373251 | orchestrator | 2026-02-02 04:59:47.373259 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-02-02 04:59:47.373267 | orchestrator | Monday 02 February 2026 04:59:18 +0000 (0:00:01.166) 0:03:53.324 ******* 2026-02-02 04:59:47.373275 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-02 04:59:47.373283 | orchestrator | 2026-02-02 04:59:47.373290 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-02-02 04:59:47.373298 | orchestrator | Monday 02 February 2026 04:59:20 +0000 (0:00:01.967) 0:03:55.292 ******* 2026-02-02 04:59:47.373306 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-02 04:59:47.373314 | orchestrator | 2026-02-02 04:59:47.373322 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-02-02 04:59:47.373329 | orchestrator | Monday 02 February 2026 04:59:22 +0000 (0:00:02.304) 0:03:57.597 ******* 2026-02-02 04:59:47.373337 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-02 04:59:47.373345 | orchestrator | 2026-02-02 04:59:47.373353 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-02-02 04:59:47.373361 | orchestrator | Monday 02 February 2026 04:59:23 +0000 (0:00:01.133) 0:03:58.731 ******* 2026-02-02 04:59:47.373374 | orchestrator | ok: 
[testbed-node-0 -> localhost] 2026-02-02 04:59:47.373382 | orchestrator | 2026-02-02 04:59:47.373390 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-02-02 04:59:47.373416 | orchestrator | Monday 02 February 2026 04:59:24 +0000 (0:00:01.214) 0:03:59.945 ******* 2026-02-02 04:59:47.373425 | orchestrator | ok: [testbed-node-0 -> localhost] => { 2026-02-02 04:59:47.373433 | orchestrator |  "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n" 2026-02-02 04:59:47.373441 | orchestrator | } 2026-02-02 04:59:47.373449 | orchestrator | 2026-02-02 04:59:47.373457 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-02-02 04:59:47.373465 | orchestrator | Monday 02 February 2026 04:59:25 +0000 (0:00:01.153) 0:04:01.099 ******* 2026-02-02 04:59:47.373473 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:59:47.373480 | orchestrator | 2026-02-02 04:59:47.373488 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-02-02 04:59:47.373496 | orchestrator | Monday 02 February 2026 04:59:27 +0000 (0:00:01.196) 0:04:02.295 ******* 2026-02-02 04:59:47.373503 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-02-02 04:59:47.373511 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-02-02 04:59:47.373519 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-02-02 04:59:47.373527 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-02-02 04:59:47.373534 | orchestrator | 2026-02-02 04:59:47.373542 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-02-02 04:59:47.373550 | orchestrator | Monday 02 February 2026 04:59:32 +0000 (0:00:05.567) 0:04:07.863 ******* 2026-02-02 04:59:47.373557 | orchestrator 
| ok: [testbed-node-0 -> localhost] 2026-02-02 04:59:47.373565 | orchestrator | 2026-02-02 04:59:47.373573 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-02-02 04:59:47.373581 | orchestrator | Monday 02 February 2026 04:59:35 +0000 (0:00:02.405) 0:04:10.268 ******* 2026-02-02 04:59:47.373588 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-02 04:59:47.373596 | orchestrator | 2026-02-02 04:59:47.373604 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-02-02 04:59:47.373612 | orchestrator | Monday 02 February 2026 04:59:37 +0000 (0:00:02.601) 0:04:12.870 ******* 2026-02-02 04:59:47.373619 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-02 04:59:47.373627 | orchestrator | 2026-02-02 04:59:47.373635 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-02-02 04:59:47.373650 | orchestrator | Monday 02 February 2026 04:59:41 +0000 (0:00:04.182) 0:04:17.052 ******* 2026-02-02 04:59:47.373658 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:59:47.373666 | orchestrator | 2026-02-02 04:59:47.373674 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-02-02 04:59:47.373681 | orchestrator | Monday 02 February 2026 04:59:43 +0000 (0:00:01.137) 0:04:18.190 ******* 2026-02-02 04:59:47.373689 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-02-02 04:59:47.373696 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-02-02 04:59:47.373704 | orchestrator | 2026-02-02 04:59:47.373712 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-02-02 04:59:47.373720 | orchestrator | Monday 02 February 2026 04:59:45 +0000 (0:00:02.935) 0:04:21.126 ******* 2026-02-02 
04:59:47.373727 | orchestrator | skipping: [testbed-node-0] 2026-02-02 04:59:47.373741 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:00:14.064849 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:00:14.064940 | orchestrator | 2026-02-02 05:00:14.064951 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-02-02 05:00:14.064959 | orchestrator | Monday 02 February 2026 04:59:47 +0000 (0:00:01.386) 0:04:22.513 ******* 2026-02-02 05:00:14.064983 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:00:14.064991 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:00:14.064997 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:00:14.065003 | orchestrator | 2026-02-02 05:00:14.065010 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-02-02 05:00:14.065016 | orchestrator | 2026-02-02 05:00:14.065023 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-02-02 05:00:14.065029 | orchestrator | Monday 02 February 2026 04:59:49 +0000 (0:00:02.084) 0:04:24.597 ******* 2026-02-02 05:00:14.065035 | orchestrator | ok: [testbed-manager] 2026-02-02 05:00:14.065041 | orchestrator | 2026-02-02 05:00:14.065048 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-02-02 05:00:14.065054 | orchestrator | Monday 02 February 2026 04:59:50 +0000 (0:00:01.139) 0:04:25.736 ******* 2026-02-02 05:00:14.065072 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-02-02 05:00:14.065079 | orchestrator | 2026-02-02 05:00:14.065085 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-02-02 05:00:14.065091 | orchestrator | Monday 02 February 2026 04:59:52 +0000 (0:00:01.439) 0:04:27.176 ******* 2026-02-02 05:00:14.065097 | orchestrator | ok: [testbed-manager] 2026-02-02 05:00:14.065104 | 
orchestrator | 2026-02-02 05:00:14.065110 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-02-02 05:00:14.065116 | orchestrator | 2026-02-02 05:00:14.065122 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-02-02 05:00:14.065128 | orchestrator | Monday 02 February 2026 04:59:57 +0000 (0:00:05.178) 0:04:32.355 ******* 2026-02-02 05:00:14.065134 | orchestrator | ok: [testbed-node-3] 2026-02-02 05:00:14.065140 | orchestrator | ok: [testbed-node-4] 2026-02-02 05:00:14.065147 | orchestrator | ok: [testbed-node-5] 2026-02-02 05:00:14.065153 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:00:14.065159 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:00:14.065165 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:00:14.065171 | orchestrator | 2026-02-02 05:00:14.065177 | orchestrator | TASK [Manage labels] *********************************************************** 2026-02-02 05:00:14.065183 | orchestrator | Monday 02 February 2026 04:59:59 +0000 (0:00:02.139) 0:04:34.494 ******* 2026-02-02 05:00:14.065189 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-02 05:00:14.065196 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-02 05:00:14.065202 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-02 05:00:14.065208 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-02 05:00:14.065214 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-02 05:00:14.065220 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-02 05:00:14.065227 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 
2026-02-02 05:00:14.065233 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-02 05:00:14.065239 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-02 05:00:14.065245 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-02 05:00:14.065251 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-02 05:00:14.065330 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-02 05:00:14.065338 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-02 05:00:14.065344 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-02 05:00:14.065350 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-02 05:00:14.065363 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-02 05:00:14.065369 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-02 05:00:14.065375 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-02 05:00:14.065381 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-02 05:00:14.065409 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-02 05:00:14.065422 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-02 05:00:14.065434 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-02 05:00:14.065445 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-02 
05:00:14.065458 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-02 05:00:14.065470 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-02 05:00:14.065480 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-02 05:00:14.065502 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-02 05:00:14.065511 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-02 05:00:14.065519 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-02 05:00:14.065527 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-02 05:00:14.065559 | orchestrator | 2026-02-02 05:00:14.065566 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-02-02 05:00:14.065574 | orchestrator | Monday 02 February 2026 05:00:09 +0000 (0:00:10.285) 0:04:44.780 ******* 2026-02-02 05:00:14.065581 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:00:14.065590 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:00:14.065598 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:00:14.065606 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:00:14.065613 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:00:14.065621 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:00:14.065629 | orchestrator | 2026-02-02 05:00:14.065636 | orchestrator | TASK [Manage taints] *********************************************************** 2026-02-02 05:00:14.065649 | orchestrator | Monday 02 February 2026 05:00:11 +0000 (0:00:01.746) 0:04:46.526 ******* 2026-02-02 05:00:14.065657 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:00:14.065665 | orchestrator | skipping: [testbed-node-4] 
2026-02-02 05:00:14.065672 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:00:14.065680 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:00:14.065687 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:00:14.065693 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:00:14.065699 | orchestrator | 2026-02-02 05:00:14.065705 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 05:00:14.065711 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 05:00:14.065720 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-02 05:00:14.065727 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-02 05:00:14.065733 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-02 05:00:14.065739 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-02 05:00:14.065751 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-02 05:00:14.065757 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-02 05:00:14.065763 | orchestrator | 2026-02-02 05:00:14.065770 | orchestrator | 2026-02-02 05:00:14.065776 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 05:00:14.065782 | orchestrator | Monday 02 February 2026 05:00:14 +0000 (0:00:02.664) 0:04:49.190 ******* 2026-02-02 05:00:14.065788 | orchestrator | =============================================================================== 2026-02-02 05:00:14.065795 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 27.76s 2026-02-02 05:00:14.065801 | 
orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 23.35s 2026-02-02 05:00:14.065810 | orchestrator | Manage labels ---------------------------------------------------------- 10.29s 2026-02-02 05:00:14.065817 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 7.91s 2026-02-02 05:00:14.065824 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 5.57s 2026-02-02 05:00:14.065831 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.18s 2026-02-02 05:00:14.065839 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 5.04s 2026-02-02 05:00:14.065846 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.51s 2026-02-02 05:00:14.065853 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.18s 2026-02-02 05:00:14.065860 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 3.17s 2026-02-02 05:00:14.065867 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 3.10s 2026-02-02 05:00:14.065874 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 3.07s 2026-02-02 05:00:14.065881 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.94s 2026-02-02 05:00:14.065889 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.92s 2026-02-02 05:00:14.065896 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 2.76s 2026-02-02 05:00:14.065903 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.71s 2026-02-02 05:00:14.065910 | orchestrator | k3s_prereq : Enable 
IPv4 forwarding ------------------------------------- 2.69s 2026-02-02 05:00:14.065917 | orchestrator | Manage taints ----------------------------------------------------------- 2.66s 2026-02-02 05:00:14.065930 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.60s 2026-02-02 05:00:14.581356 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.60s 2026-02-02 05:00:14.927466 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-02-02 05:00:14.927584 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh 2026-02-02 05:00:14.939324 | orchestrator | + set -e 2026-02-02 05:00:14.939422 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-02 05:00:14.939437 | orchestrator | ++ export INTERACTIVE=false 2026-02-02 05:00:14.939450 | orchestrator | ++ INTERACTIVE=false 2026-02-02 05:00:14.939461 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-02 05:00:14.939472 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-02 05:00:14.939494 | orchestrator | + osism apply openstackclient 2026-02-02 05:00:27.050309 | orchestrator | 2026-02-02 05:00:27 | INFO  | Task 5bb9c9f1-dbc2-4f32-81b6-22904ab76433 (openstackclient) was prepared for execution. 2026-02-02 05:00:27.050497 | orchestrator | 2026-02-02 05:00:27 | INFO  | It takes a moment until task 5bb9c9f1-dbc2-4f32-81b6-22904ab76433 (openstackclient) has been started and output is visible here. 
2026-02-02 05:00:52.268148 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-02 05:00:52.268261 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-02 05:00:52.268281 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-02 05:00:52.268290 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-02 05:00:52.268310 | orchestrator | 2026-02-02 05:00:52.269130 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-02-02 05:00:52.269146 | orchestrator | 2026-02-02 05:00:52.269154 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-02-02 05:00:52.269161 | orchestrator | Monday 02 February 2026 05:00:33 +0000 (0:00:01.676) 0:00:01.676 ******* 2026-02-02 05:00:52.269168 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-02-02 05:00:52.269175 | orchestrator | 2026-02-02 05:00:52.269181 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-02-02 05:00:52.269187 | orchestrator | Monday 02 February 2026 05:00:34 +0000 (0:00:00.853) 0:00:02.530 ******* 2026-02-02 05:00:52.269192 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-02-02 05:00:52.269198 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data) 2026-02-02 05:00:52.269203 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-02-02 05:00:52.269209 | orchestrator | 2026-02-02 05:00:52.269214 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-02-02 05:00:52.269220 | orchestrator | Monday 02 February 2026 05:00:35 +0000 (0:00:01.376) 0:00:03.907 ******* 2026-02-02 05:00:52.269226 | 
orchestrator | changed: [testbed-manager] 2026-02-02 05:00:52.269231 | orchestrator | 2026-02-02 05:00:52.269237 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-02-02 05:00:52.269242 | orchestrator | Monday 02 February 2026 05:00:36 +0000 (0:00:01.322) 0:00:05.229 ******* 2026-02-02 05:00:52.269248 | orchestrator | ok: [testbed-manager] 2026-02-02 05:00:52.269255 | orchestrator | 2026-02-02 05:00:52.269261 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-02-02 05:00:52.269266 | orchestrator | Monday 02 February 2026 05:00:38 +0000 (0:00:01.115) 0:00:06.344 ******* 2026-02-02 05:00:52.269271 | orchestrator | ok: [testbed-manager] 2026-02-02 05:00:52.269277 | orchestrator | 2026-02-02 05:00:52.269282 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-02-02 05:00:52.269288 | orchestrator | Monday 02 February 2026 05:00:38 +0000 (0:00:00.879) 0:00:07.223 ******* 2026-02-02 05:00:52.269293 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-02 05:00:52.269299 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-02 05:00:52.269309 | orchestrator | ok: [testbed-manager] 2026-02-02 05:00:52.269315 | orchestrator | 2026-02-02 05:00:52.269320 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-02-02 05:00:52.269326 | orchestrator | Monday 02 February 2026 05:00:39 +0000 (0:00:00.745) 0:00:07.969 ******* 2026-02-02 05:00:52.269331 | orchestrator | changed: [testbed-manager] 2026-02-02 05:00:52.269337 | orchestrator | 2026-02-02 05:00:52.269342 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-02-02 05:00:52.269347 | orchestrator | Monday 02 February 2026 05:00:48 +0000 (0:00:09.263) 0:00:17.232 ******* 2026-02-02 05:00:52.269353 
| orchestrator | changed: [testbed-manager] 2026-02-02 05:00:52.269358 | orchestrator | 2026-02-02 05:00:52.269420 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-02-02 05:00:52.269427 | orchestrator | Monday 02 February 2026 05:00:50 +0000 (0:00:01.332) 0:00:18.565 ******* 2026-02-02 05:00:52.269433 | orchestrator | changed: [testbed-manager] 2026-02-02 05:00:52.269438 | orchestrator | 2026-02-02 05:00:52.269443 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-02-02 05:00:52.269449 | orchestrator | Monday 02 February 2026 05:00:50 +0000 (0:00:00.578) 0:00:19.143 ******* 2026-02-02 05:00:52.269454 | orchestrator | ok: [testbed-manager] 2026-02-02 05:00:52.269460 | orchestrator | 2026-02-02 05:00:52.269465 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 05:00:52.269470 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-02 05:00:52.269477 | orchestrator | 2026-02-02 05:00:52.269482 | orchestrator | 2026-02-02 05:00:52.269488 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 05:00:52.269493 | orchestrator | Monday 02 February 2026 05:00:51 +0000 (0:00:01.112) 0:00:20.256 ******* 2026-02-02 05:00:52.269498 | orchestrator | =============================================================================== 2026-02-02 05:00:52.269504 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 9.26s 2026-02-02 05:00:52.269509 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.38s 2026-02-02 05:00:52.269515 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.33s 2026-02-02 05:00:52.269520 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file 
----------- 1.32s 2026-02-02 05:00:52.269525 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 1.12s 2026-02-02 05:00:52.269531 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.11s 2026-02-02 05:00:52.269552 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.88s 2026-02-02 05:00:52.269564 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.85s 2026-02-02 05:00:52.269569 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.75s 2026-02-02 05:00:52.269575 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.58s 2026-02-02 05:00:52.592090 | orchestrator | + osism apply -a upgrade common 2026-02-02 05:00:54.702299 | orchestrator | 2026-02-02 05:00:54 | INFO  | Task e95bcad3-4cd6-4639-bc69-59753d041cd8 (common) was prepared for execution. 2026-02-02 05:00:54.702423 | orchestrator | 2026-02-02 05:00:54 | INFO  | It takes a moment until task e95bcad3-4cd6-4639-bc69-59753d041cd8 (common) has been started and output is visible here. 
2026-02-02 05:01:14.493664 | orchestrator | 2026-02-02 05:01:14.493803 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-02 05:01:14.493823 | orchestrator | 2026-02-02 05:01:14.493836 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-02 05:01:14.493848 | orchestrator | Monday 02 February 2026 05:01:01 +0000 (0:00:02.362) 0:00:02.362 ******* 2026-02-02 05:01:14.493860 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 05:01:14.493873 | orchestrator | 2026-02-02 05:01:14.493884 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-02 05:01:14.493895 | orchestrator | Monday 02 February 2026 05:01:05 +0000 (0:00:03.831) 0:00:06.193 ******* 2026-02-02 05:01:14.493907 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-02 05:01:14.493918 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-02 05:01:14.493930 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-02 05:01:14.493941 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-02 05:01:14.493978 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-02 05:01:14.493990 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-02 05:01:14.494001 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-02 05:01:14.494012 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-02 05:01:14.494187 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-02 05:01:14.494210 
| orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-02 05:01:14.494229 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-02 05:01:14.494247 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-02 05:01:14.494266 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-02 05:01:14.494285 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-02 05:01:14.494305 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-02 05:01:14.494324 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-02 05:01:14.494337 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-02 05:01:14.494349 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-02 05:01:14.494362 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-02 05:01:14.494401 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-02 05:01:14.494415 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-02 05:01:14.494436 | orchestrator | 2026-02-02 05:01:14.494456 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-02 05:01:14.494476 | orchestrator | Monday 02 February 2026 05:01:08 +0000 (0:00:03.774) 0:00:09.968 ******* 2026-02-02 05:01:14.494497 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 05:01:14.494520 | orchestrator | 2026-02-02 
05:01:14.494543 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-02-02 05:01:14.494563 | orchestrator | Monday 02 February 2026 05:01:11 +0000 (0:00:02.975) 0:00:12.943 *******
2026-02-02 05:01:14.494589 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:01:14.494627 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:01:14.494671 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:01:14.494697 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:01:14.494708 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:01:14.494719 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:01:14.494940 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:01:14.494970 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:14.494983 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:14.495016 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:17.593803 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:17.593886 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:17.593908 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:17.593917 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:17.593932 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:17.593942 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:17.593950 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:17.593992 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:17.594000 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:17.594008 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:17.594047 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:17.594057 | orchestrator |
2026-02-02 05:01:17.594066 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-02-02 05:01:17.594074 | orchestrator | Monday 02 February 2026 05:01:16 +0000 (0:00:04.548) 0:00:17.491 *******
2026-02-02 05:01:17.594088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:01:17.594097 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:01:17.594104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:17.594121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:17.594142 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:01:17.594163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:01:20.048655 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:20.048764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:20.048780 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:01:20.048839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:20.048852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:01:20.048885 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:20.048899 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:01:20.048912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:20.048922 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:01:20.048951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:20.048962 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:20.048972 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:01:20.048982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:20.048992 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:01:20.049006 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:01:20.049017 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:01:20.049027 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:20.049044 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:20.049053 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:01:20.049070 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:23.894590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:23.894693 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:01:23.894728 | orchestrator |
2026-02-02 05:01:23.894754 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-02-02 05:01:23.894767 | orchestrator | Monday 02 February 2026 05:01:20 +0000 (0:00:03.579) 0:00:21.071 *******
2026-02-02 05:01:23.894781 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:01:23.894817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:01:23.894837 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:23.894884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:23.894906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:23.894926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:01:23.894976 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:23.894990 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:01:23.895001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:23.895013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:01:23.895025 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:01:23.895045 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:23.895056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:23.895068 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:23.895079 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:01:23.895090 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:01:23.895100 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:01:23.895127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:37.793215 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:01:37.793329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:37.793347 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:01:37.793508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:37.793547 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:37.793559 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:01:37.793571 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:01:37.793583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:37.793594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:01:37.793607 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:01:37.793618 | orchestrator |
2026-02-02 05:01:37.793631 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] *****************
2026-02-02 05:01:37.793643 | orchestrator | Monday 02 February 2026 05:01:23 +0000 (0:00:03.852) 0:00:24.923 *******
2026-02-02 05:01:37.793654 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:01:37.793665 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:01:37.793676 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:01:37.793687 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:01:37.793716 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:01:37.793727 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:01:37.793738 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:01:37.793749 | orchestrator |
2026-02-02 05:01:37.793761 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-02-02 05:01:37.793775 | orchestrator | Monday 02 February 2026 05:01:26 +0000 (0:00:02.576) 0:00:27.500 *******
2026-02-02 05:01:37.793788 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:01:37.793801 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:01:37.793814 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:01:37.793826 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:01:37.793839 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:01:37.793851 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:01:37.793864 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:01:37.793876 | orchestrator |
2026-02-02 05:01:37.793900 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-02-02 05:01:37.793913 | orchestrator | Monday 02 February 2026 05:01:28 +0000 (0:00:02.220) 0:00:29.720 *******
2026-02-02 05:01:37.793926 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:01:37.793939 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:01:37.793952 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:01:37.793965 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:01:37.793978 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:01:37.793990 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:01:37.794002 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:01:37.794086 | orchestrator |
2026-02-02 05:01:37.794101 | orchestrator | TASK [common : Copying over kolla.target] **************************************
2026-02-02 05:01:37.794114 | orchestrator | Monday 02 February 2026 05:01:31 +0000 (0:00:02.543) 0:00:32.263 *******
2026-02-02 05:01:37.794134 | orchestrator | changed: [testbed-manager]
2026-02-02 05:01:37.794148 | orchestrator | changed: [testbed-node-0]
2026-02-02 05:01:37.794167 | orchestrator | changed: [testbed-node-1]
2026-02-02 05:01:37.794186 | orchestrator | changed: [testbed-node-2]
2026-02-02 05:01:37.794207 | orchestrator | changed: [testbed-node-3]
2026-02-02 05:01:37.794226 | orchestrator | changed: [testbed-node-4]
2026-02-02 05:01:37.794243 | orchestrator | changed: [testbed-node-5]
2026-02-02 05:01:37.794254 | orchestrator |
2026-02-02 05:01:37.794265 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-02-02 05:01:37.794276 | orchestrator | Monday 02 February 2026 05:01:34 +0000 (0:00:03.219) 0:00:35.483 *******
2026-02-02 05:01:37.794290 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:01:37.794311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:01:37.794330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:01:37.794343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:01:37.794365 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:01:39.876181 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:01:39.876291 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:01:39.876305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-02-02 05:01:39.876315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:01:39.876325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:01:39.876334 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:01:39.876411 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:01:39.876424 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:01:39.876436 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:01:39.876445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:01:39.876455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:01:39.876464 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:01:39.876474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:01:39.876489 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:01:39.876505 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:01:39.876521 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:01:59.660002 | orchestrator | 2026-02-02 05:01:59.660116 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-02 05:01:59.660132 | orchestrator | Monday 02 February 2026 05:01:39 +0000 (0:00:05.421) 0:00:40.905 ******* 2026-02-02 05:01:59.660144 | orchestrator | [WARNING]: Skipped 2026-02-02 05:01:59.660157 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-02 05:01:59.660168 | orchestrator | to this access issue: 2026-02-02 05:01:59.660180 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-02 05:01:59.660191 | orchestrator | directory 2026-02-02 05:01:59.660202 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-02 05:01:59.660214 | 
orchestrator | 2026-02-02 05:01:59.660225 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-02 05:01:59.660236 | orchestrator | Monday 02 February 2026 05:01:42 +0000 (0:00:02.492) 0:00:43.397 ******* 2026-02-02 05:01:59.660246 | orchestrator | [WARNING]: Skipped 2026-02-02 05:01:59.660273 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-02 05:01:59.660284 | orchestrator | to this access issue: 2026-02-02 05:01:59.660294 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-02 05:01:59.660305 | orchestrator | directory 2026-02-02 05:01:59.660316 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-02 05:01:59.660327 | orchestrator | 2026-02-02 05:01:59.660339 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-02 05:01:59.660350 | orchestrator | Monday 02 February 2026 05:01:44 +0000 (0:00:01.877) 0:00:45.275 ******* 2026-02-02 05:01:59.660360 | orchestrator | [WARNING]: Skipped 2026-02-02 05:01:59.660444 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-02 05:01:59.660462 | orchestrator | to this access issue: 2026-02-02 05:01:59.660480 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-02-02 05:01:59.660496 | orchestrator | directory 2026-02-02 05:01:59.660515 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-02 05:01:59.660535 | orchestrator | 2026-02-02 05:01:59.660554 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-02 05:01:59.660574 | orchestrator | Monday 02 February 2026 05:01:46 +0000 (0:00:01.961) 0:00:47.236 ******* 2026-02-02 05:01:59.660591 | orchestrator | [WARNING]: Skipped 2026-02-02 05:01:59.660604 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-02 05:01:59.660616 | orchestrator | to this access issue: 2026-02-02 05:01:59.660629 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-02 05:01:59.660642 | orchestrator | directory 2026-02-02 05:01:59.660655 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-02 05:01:59.660692 | orchestrator | 2026-02-02 05:01:59.660706 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-02 05:01:59.660718 | orchestrator | Monday 02 February 2026 05:01:48 +0000 (0:00:01.857) 0:00:49.094 ******* 2026-02-02 05:01:59.660731 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:01:59.660744 | orchestrator | changed: [testbed-manager] 2026-02-02 05:01:59.660757 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:01:59.660767 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:01:59.660778 | orchestrator | changed: [testbed-node-3] 2026-02-02 05:01:59.660789 | orchestrator | changed: [testbed-node-4] 2026-02-02 05:01:59.660799 | orchestrator | changed: [testbed-node-5] 2026-02-02 05:01:59.660810 | orchestrator | 2026-02-02 05:01:59.660820 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-02 05:01:59.660831 | orchestrator | Monday 02 February 2026 05:01:52 +0000 (0:00:04.069) 0:00:53.163 ******* 2026-02-02 05:01:59.660842 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-02 05:01:59.660854 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-02 05:01:59.660865 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-02 05:01:59.660876 | orchestrator | ok: [testbed-node-2] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-02 05:01:59.660886 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-02 05:01:59.660897 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-02 05:01:59.660908 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-02 05:01:59.660918 | orchestrator | 2026-02-02 05:01:59.660929 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-02 05:01:59.660940 | orchestrator | Monday 02 February 2026 05:01:55 +0000 (0:00:03.026) 0:00:56.190 ******* 2026-02-02 05:01:59.660951 | orchestrator | ok: [testbed-manager] 2026-02-02 05:01:59.660962 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:01:59.660972 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:01:59.660983 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:01:59.660993 | orchestrator | ok: [testbed-node-3] 2026-02-02 05:01:59.661004 | orchestrator | ok: [testbed-node-4] 2026-02-02 05:01:59.661014 | orchestrator | ok: [testbed-node-5] 2026-02-02 05:01:59.661025 | orchestrator | 2026-02-02 05:01:59.661036 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-02 05:01:59.661046 | orchestrator | Monday 02 February 2026 05:01:57 +0000 (0:00:02.763) 0:00:58.953 ******* 2026-02-02 05:01:59.661078 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:01:59.661102 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:01:59.661115 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:01:59.661134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 
05:01:59.661148 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:01:59.661161 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:01:59.661172 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:01:59.661184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:01:59.661204 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:02:09.909473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:09.909641 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:02:09.909671 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:09.909692 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:02:09.909713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:09.909736 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:02:09.909760 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:02:09.909801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:09.909822 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-02-02 05:02:09.909834 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:09.909846 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:09.909857 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:09.909868 | orchestrator |
2026-02-02 05:02:09.909881 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-02-02 05:02:09.909894 | orchestrator | Monday 02 February 2026 05:02:00 +0000 (0:00:02.939) 0:01:01.894 *******
2026-02-02 05:02:09.909905 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-02 05:02:09.909916 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-02 05:02:09.909927 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-02 05:02:09.909938 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-02 05:02:09.909948 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-02 05:02:09.909959 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-02 05:02:09.909970 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-02 05:02:09.909981 | orchestrator |
2026-02-02 05:02:09.909991 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-02-02 05:02:09.910002 | orchestrator | Monday 02 February 2026 05:02:03 +0000 (0:00:03.061) 0:01:04.956 *******
2026-02-02 05:02:09.910069 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-02 05:02:09.910095 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-02 05:02:09.910107 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-02 05:02:09.910118 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-02 05:02:09.910136 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-02 05:02:09.910146 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-02 05:02:09.910157 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-02 05:02:09.910168 | orchestrator |
2026-02-02 05:02:09.910178 | orchestrator | TASK [service-check-containers : common | Check containers] ********************
2026-02-02 05:02:09.910189 | orchestrator | Monday 02 February 2026 05:02:07 +0000 (0:00:03.432) 0:01:08.389 *******
2026-02-02 05:02:09.910225 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:02:11.895339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:02:11.895507 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:02:11.895523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:02:11.895535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:02:11.895547 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:02:11.895558 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:02:11.895594 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:11.895641 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:11.895654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:11.895670 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:11.895682 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:11.895693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:11.895712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:11.895725 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:11.895751 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:14.844168 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:14.844271 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:14.844303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:14.844317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:14.844340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:14.844441 | orchestrator |
2026-02-02 05:02:14.844459 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] ***
2026-02-02 05:02:14.844471 | orchestrator | Monday 02 February 2026 05:02:11 +0000 (0:00:04.532) 0:01:12.924 *******
2026-02-02 05:02:14.844483 | orchestrator | changed: [testbed-manager] => {
2026-02-02 05:02:14.844495 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 05:02:14.844506 | orchestrator | }
2026-02-02 05:02:14.844517 | orchestrator | changed: [testbed-node-0] => {
2026-02-02 05:02:14.844528 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 05:02:14.844539 | orchestrator | }
2026-02-02 05:02:14.844549 | orchestrator | changed: [testbed-node-1] => {
2026-02-02 05:02:14.844560 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 05:02:14.844571 | orchestrator | }
2026-02-02 05:02:14.844581 | orchestrator | changed: [testbed-node-2] => {
2026-02-02 05:02:14.844592 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 05:02:14.844603 | orchestrator | }
2026-02-02 05:02:14.844614 | orchestrator | changed: [testbed-node-3] => {
2026-02-02 05:02:14.844624 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 05:02:14.844635 | orchestrator | }
2026-02-02 05:02:14.844646 | orchestrator | changed: [testbed-node-4] => {
2026-02-02 05:02:14.844657 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 05:02:14.844667 | orchestrator | }
2026-02-02 05:02:14.844678 | orchestrator | changed: [testbed-node-5] => {
2026-02-02 05:02:14.844689 | orchestrator |  "msg": "Notifying handlers"
2026-02-02 05:02:14.844700 | orchestrator | }
2026-02-02 05:02:14.844711 | orchestrator |
2026-02-02 05:02:14.844722 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-02 05:02:14.844733 | orchestrator | Monday 02 February 2026 05:02:14 +0000 (0:00:02.218) 0:01:15.143 *******
2026-02-02 05:02:14.844761 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name':
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:02:14.844795 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:14.844808 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:14.844819 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:02:14.844831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:02:14.844850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:14.844861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:14.844873 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:02:14.844883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:02:14.844895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:14.844906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:14.844925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:02:24.509902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:24.510078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:24.510106 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:02:24.510121 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:02:24.510136 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:24.510150 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:24.510161 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:02:24.510173 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:02:24.510201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:02:24.510214 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:24.510250 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:24.510275 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:02:24.510287 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:02:24.510299 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:24.510311 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron',
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:24.510324 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:02:24.510336 | orchestrator |
2026-02-02 05:02:24.510351 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-02 05:02:24.510394 | orchestrator | Monday 02 February 2026 05:02:17 +0000 (0:00:03.211) 0:01:18.355 *******
2026-02-02 05:02:24.510405 | orchestrator |
2026-02-02 05:02:24.510418 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-02 05:02:24.510430 | orchestrator | Monday 02 February 2026 05:02:17 +0000 (0:00:00.446) 0:01:18.802 *******
2026-02-02 05:02:24.510443 | orchestrator |
2026-02-02 05:02:24.510456 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-02 05:02:24.510468 | orchestrator | Monday 02 February 2026 05:02:18 +0000 (0:00:00.569) 0:01:19.372 *******
2026-02-02 05:02:24.510480 | orchestrator |
2026-02-02 05:02:24.510492 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-02 05:02:24.510505 | orchestrator | Monday 02 February 2026 05:02:18 +0000 (0:00:00.495) 0:01:19.867 *******
2026-02-02 05:02:24.510519 | orchestrator |
2026-02-02 05:02:24.510531 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-02 05:02:24.510543 | orchestrator | Monday 02 February 2026 05:02:19 +0000 (0:00:00.452) 0:01:20.319 *******
2026-02-02 05:02:24.510555 | orchestrator |
2026-02-02 05:02:24.510569 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-02 05:02:24.510582 | orchestrator | Monday 02 February 2026 05:02:20 +0000 (0:00:00.779) 0:01:21.099 *******
2026-02-02 05:02:24.510596 | orchestrator |
2026-02-02 05:02:24.510608 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-02 05:02:24.510622 | orchestrator | Monday 02 February 2026 05:02:20 +0000 (0:00:00.477) 0:01:21.577 *******
2026-02-02 05:02:24.510636 | orchestrator |
2026-02-02 05:02:24.510650 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-02-02 05:02:24.510662 | orchestrator | Monday 02 February 2026 05:02:21 +0000 (0:00:00.880) 0:01:22.458 *******
2026-02-02 05:02:24.510706 | orchestrator | fatal: [testbed-manager]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_ptax35xl/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_ptax35xl/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_ptax35xl/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-02-02 05:02:27.873829 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_kcwsmgsa/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_kcwsmgsa/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_kcwsmgsa/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-02-02 05:02:27.874012 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_vsuu3o48/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_vsuu3o48/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_vsuu3o48/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File
\"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-02 05:02:27.874107 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_po52x117/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_po52x117/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_po52x117/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-02 05:02:27.874143 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_2qqgd0p4/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_2qqgd0p4/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload_2qqgd0p4/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-02 05:02:28.637511 | orchestrator | 2026-02-02 05:02:28 | INFO  | Task 349163b0-38ca-4648-89f3-5557cd8e8c46 (common) was prepared for execution. 2026-02-02 05:02:28.637636 | orchestrator | 2026-02-02 05:02:28 | INFO  | It takes a moment until task 349163b0-38ca-4648-89f3-5557cd8e8c46 (common) has been started and output is visible here. 2026-02-02 05:02:34.704542 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_qs3d2ke3/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_qs3d2ke3/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_qs3d2ke3/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-02 05:02:34.704700 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_1wptx_d3/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_1wptx_d3/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_1wptx_d3/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-02 05:02:34.704734 | orchestrator | 2026-02-02 05:02:34.704749 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 05:02:34.704763 | orchestrator | testbed-manager : ok=18  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-02 05:02:34.704776 | orchestrator | testbed-node-0 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-02 05:02:34.704787 | orchestrator | testbed-node-1 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-02 05:02:34.704798 | orchestrator | testbed-node-2 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-02 05:02:34.704808 | orchestrator | testbed-node-3 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-02 05:02:34.704819 | orchestrator | testbed-node-4 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-02 05:02:34.704830 | orchestrator | testbed-node-5 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-02 05:02:34.704841 | orchestrator | 2026-02-02 05:02:34.704852 | orchestrator | 2026-02-02 05:02:34.704863 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 05:02:34.704875 | orchestrator | Monday 02 February 2026 05:02:27 +0000 (0:00:06.455) 0:01:28.914 ******* 2026-02-02 05:02:34.704885 | orchestrator | =============================================================================== 
2026-02-02 05:02:34.704930 | orchestrator | common : Restart fluentd container -------------------------------------- 6.46s 2026-02-02 05:02:34.704941 | orchestrator | common : Copying over config.json files for services -------------------- 5.42s 2026-02-02 05:02:34.704953 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.55s 2026-02-02 05:02:34.704964 | orchestrator | service-check-containers : common | Check containers -------------------- 4.54s 2026-02-02 05:02:34.704976 | orchestrator | common : Flush handlers ------------------------------------------------- 4.10s 2026-02-02 05:02:34.704987 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.07s 2026-02-02 05:02:34.705001 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.85s 2026-02-02 05:02:34.705023 | orchestrator | common : include_tasks -------------------------------------------------- 3.83s 2026-02-02 05:02:34.705036 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.77s 2026-02-02 05:02:34.705048 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.58s 2026-02-02 05:02:34.705062 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.43s 2026-02-02 05:02:34.705074 | orchestrator | common : Copying over kolla.target -------------------------------------- 3.22s 2026-02-02 05:02:34.705087 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.21s 2026-02-02 05:02:34.705099 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.06s 2026-02-02 05:02:34.705111 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.03s 2026-02-02 05:02:34.705124 | orchestrator | common : include_tasks -------------------------------------------------- 2.98s 2026-02-02 
05:02:34.705137 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.94s 2026-02-02 05:02:34.705150 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.76s 2026-02-02 05:02:34.705161 | orchestrator | common : Ensure /var/log/journal exists on EL10 systems ----------------- 2.58s 2026-02-02 05:02:34.705171 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 2.54s 2026-02-02 05:02:34.705190 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-02 05:02:34.705202 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-02 05:02:34.705223 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-02 05:02:34.705234 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-02 05:02:34.705256 | orchestrator | 2026-02-02 05:02:34.705274 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-02 05:02:43.928536 | orchestrator | 2026-02-02 05:02:43.928672 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-02 05:02:43.928696 | orchestrator | Monday 02 February 2026 05:02:34 +0000 (0:00:01.664) 0:00:01.664 ******* 2026-02-02 05:02:43.928712 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 05:02:43.928727 | orchestrator | 2026-02-02 05:02:43.928742 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-02 05:02:43.928773 | orchestrator | Monday 02 February 2026 05:02:36 +0000 (0:00:02.188) 0:00:03.853 ******* 2026-02-02 05:02:43.928789 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-02 05:02:43.928804 | 
orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-02 05:02:43.928818 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-02 05:02:43.928831 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-02 05:02:43.928846 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-02 05:02:43.928860 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-02 05:02:43.928874 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-02 05:02:43.928888 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-02 05:02:43.928902 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-02 05:02:43.928917 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-02 05:02:43.928931 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-02 05:02:43.928945 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-02 05:02:43.928959 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-02 05:02:43.928973 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-02 05:02:43.928989 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-02 05:02:43.929005 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-02 05:02:43.929020 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-02 05:02:43.929034 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-02 05:02:43.929048 | orchestrator | 
ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-02 05:02:43.929062 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-02 05:02:43.929077 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-02 05:02:43.929092 | orchestrator | 2026-02-02 05:02:43.929106 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-02 05:02:43.929152 | orchestrator | Monday 02 February 2026 05:02:39 +0000 (0:00:02.128) 0:00:05.981 ******* 2026-02-02 05:02:43.929169 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 05:02:43.929186 | orchestrator | 2026-02-02 05:02:43.929202 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-02 05:02:43.929217 | orchestrator | Monday 02 February 2026 05:02:41 +0000 (0:00:02.198) 0:00:08.179 ******* 2026-02-02 05:02:43.929236 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:02:43.929256 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:02:43.929299 | orchestrator | ok: [testbed-node-1] => (item=fluentd, same value as testbed-manager above)
2026-02-02 05:02:43.929324 | orchestrator | ok: [testbed-node-2] => (item=fluentd, same value as testbed-manager above)
2026-02-02 05:02:43.929339 | orchestrator | ok: [testbed-node-3] => (item=fluentd, same value as testbed-manager above)
2026-02-02 05:02:43.929348 | orchestrator | ok: [testbed-node-4] => (item=fluentd, same value as testbed-manager above)
2026-02-02 05:02:43.929385 | orchestrator | ok: [testbed-node-5] => (item=fluentd, same value as testbed-manager above)
2026-02-02 05:02:43.929409 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:43.929424 | orchestrator | ok: [testbed-node-0] => (item=kolla-toolbox, same value as testbed-manager above)
2026-02-02 05:02:43.929452 | orchestrator | ok: [testbed-node-1] => (item=kolla-toolbox, same value as testbed-manager above)
2026-02-02 05:02:45.403658 | orchestrator | ok: [testbed-node-2] => (item=kolla-toolbox, same value as testbed-manager above)
2026-02-02 05:02:45.403736 | orchestrator | ok: [testbed-node-3] => (item=kolla-toolbox, same value as testbed-manager above)
2026-02-02 05:02:45.403743 | orchestrator | ok: [testbed-node-4] => (item=kolla-toolbox, same value as testbed-manager above)
2026-02-02 05:02:45.403763 | orchestrator | ok: [testbed-node-5] => (item=kolla-toolbox, same value as testbed-manager above)
2026-02-02 05:02:45.403770 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-02 05:02:45.403779 | orchestrator | ok: [testbed-manager] => (item=cron, same value as testbed-node-0 above)
2026-02-02 05:02:45.403786 | orchestrator | ok: [testbed-node-1] => (item=cron, same value as testbed-node-0 above)
2026-02-02 05:02:45.403807 | orchestrator | ok: [testbed-node-2] => (item=cron, same value as testbed-node-0 above)
2026-02-02 05:02:45.403827 | orchestrator | ok: [testbed-node-3] => (item=cron, same value as testbed-node-0 above)
2026-02-02 05:02:45.403835 | orchestrator | ok: [testbed-node-4] => (item=cron, same value as testbed-node-0 above)
2026-02-02 05:02:45.403842 | orchestrator | ok: [testbed-node-5] => (item=cron, same value as testbed-node-0 above)
2026-02-02 05:02:45.403856 | orchestrator |
2026-02-02 05:02:45.403865 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-02-02 05:02:45.403873 | orchestrator | Monday 02 February 2026 05:02:44 +0000 (0:00:03.351) 0:00:11.531 *******
2026-02-02 05:02:45.403882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-02 05:02:45.403893 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 05:02:45.403901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:45.403909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:45.403927 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:46.255757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 05:02:46.255874 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:02:46.255896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:46.255942 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:46.255959 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:02:46.255973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:46.255988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 05:02:46.256003 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:02:46.256016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:46.256032 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 05:02:46.256070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:46.256085 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:02:46.256099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 05:02:46.256124 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:46.256139 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:46.256152 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:46.256165 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:02:46.256232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 05:02:46.256249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:46.256263 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:02:46.256293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:48.526587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:48.526718 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:02:48.526737 | orchestrator | 2026-02-02 05:02:48.526752 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-02 05:02:48.526765 | orchestrator | Monday 02 February 2026 05:02:46 +0000 (0:00:01.670) 0:00:13.202 ******* 2026-02-02 05:02:48.526778 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 05:02:48.526793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 05:02:48.526806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:48.526818 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:48.526830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 05:02:48.526842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:48.526896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:48.526921 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:48.526933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 05:02:48.526944 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:02:48.526956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:48.526967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:48.526991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:48.527013 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:02:48.527024 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:02:48.527035 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:02:48.527047 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 05:02:48.527091 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 05:02:56.085179 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:56.085283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 05:02:56.085297 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:56.085308 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:56.085320 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:56.085330 | orchestrator | skipping: [testbed-node-3] 2026-02-02 
05:02:56.085341 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:56.085432 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:02:56.085462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:02:56.085479 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:02:56.085504 | orchestrator | 2026-02-02 05:02:56.085531 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-02-02 05:02:56.085550 | orchestrator | Monday 02 February 2026 05:02:48 +0000 (0:00:02.275) 0:00:15.478 ******* 2026-02-02 05:02:56.085591 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:02:56.085611 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:02:56.085630 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:02:56.085650 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:02:56.085667 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:02:56.085684 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:02:56.085695 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:02:56.085706 | orchestrator | 2026-02-02 05:02:56.085717 | orchestrator | TASK [common : Copying over /run 
subdirectories conf] ************************** 2026-02-02 05:02:56.085731 | orchestrator | Monday 02 February 2026 05:02:49 +0000 (0:00:01.204) 0:00:16.683 ******* 2026-02-02 05:02:56.085744 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:02:56.085757 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:02:56.085771 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:02:56.085783 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:02:56.085796 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:02:56.085809 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:02:56.085822 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:02:56.085835 | orchestrator | 2026-02-02 05:02:56.085848 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-02 05:02:56.085860 | orchestrator | Monday 02 February 2026 05:02:50 +0000 (0:00:00.997) 0:00:17.681 ******* 2026-02-02 05:02:56.085874 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:02:56.085890 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:02:56.085910 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:02:56.085928 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:02:56.085947 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:02:56.085965 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:02:56.085985 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:02:56.086006 | orchestrator | 2026-02-02 05:02:56.086095 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-02-02 05:02:56.086107 | orchestrator | Monday 02 February 2026 05:02:51 +0000 (0:00:00.792) 0:00:18.473 ******* 2026-02-02 05:02:56.086121 | orchestrator | ok: [testbed-manager] 2026-02-02 05:02:56.086145 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:02:56.086172 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:02:56.086190 | orchestrator | ok: [testbed-node-2] 
2026-02-02 05:02:56.086208 | orchestrator | ok: [testbed-node-3] 2026-02-02 05:02:56.086225 | orchestrator | ok: [testbed-node-4] 2026-02-02 05:02:56.086242 | orchestrator | ok: [testbed-node-5] 2026-02-02 05:02:56.086259 | orchestrator | 2026-02-02 05:02:56.086278 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-02 05:02:56.086296 | orchestrator | Monday 02 February 2026 05:02:53 +0000 (0:00:01.821) 0:00:20.294 ******* 2026-02-02 05:02:56.086316 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:02:56.086384 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:02:56.086398 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:02:56.086418 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:02:56.086444 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:02:57.109851 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:02:57.109963 | orchestrator | ok: [testbed-manager] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:02:57.109981 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:02:57.110082 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:02:57.110096 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:02:57.110123 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:02:57.110156 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:02:57.110169 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:02:57.110181 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:02:57.110205 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:02:57.110217 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:02:57.110228 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:02:57.110240 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:02:57.110252 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:02:57.110263 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:02:57.110282 | orchestrator | 
ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:03:10.217858 | orchestrator | 2026-02-02 05:03:10.217955 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-02 05:03:10.217968 | orchestrator | Monday 02 February 2026 05:02:57 +0000 (0:00:03.766) 0:00:24.061 ******* 2026-02-02 05:03:10.217977 | orchestrator | [WARNING]: Skipped 2026-02-02 05:03:10.217987 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-02 05:03:10.217996 | orchestrator | to this access issue: 2026-02-02 05:03:10.218004 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-02 05:03:10.218072 | orchestrator | directory 2026-02-02 05:03:10.218082 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-02 05:03:10.218092 | orchestrator | 2026-02-02 05:03:10.218100 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-02 05:03:10.218108 | orchestrator | Monday 02 February 2026 05:02:58 +0000 (0:00:01.327) 0:00:25.389 ******* 2026-02-02 05:03:10.218116 | orchestrator | [WARNING]: Skipped 2026-02-02 05:03:10.218124 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-02 05:03:10.218132 | orchestrator | to this access issue: 2026-02-02 05:03:10.218140 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-02 05:03:10.218148 | orchestrator | directory 2026-02-02 05:03:10.218157 | orchestrator | ok: [testbed-manager 
-> localhost] 2026-02-02 05:03:10.218165 | orchestrator | 2026-02-02 05:03:10.218173 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-02 05:03:10.218181 | orchestrator | Monday 02 February 2026 05:02:59 +0000 (0:00:00.906) 0:00:26.295 ******* 2026-02-02 05:03:10.218189 | orchestrator | [WARNING]: Skipped 2026-02-02 05:03:10.218197 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-02 05:03:10.218205 | orchestrator | to this access issue: 2026-02-02 05:03:10.218213 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-02-02 05:03:10.218221 | orchestrator | directory 2026-02-02 05:03:10.218229 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-02 05:03:10.218237 | orchestrator | 2026-02-02 05:03:10.218245 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-02 05:03:10.218253 | orchestrator | Monday 02 February 2026 05:03:00 +0000 (0:00:00.910) 0:00:27.206 ******* 2026-02-02 05:03:10.218261 | orchestrator | [WARNING]: Skipped 2026-02-02 05:03:10.218269 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-02 05:03:10.218277 | orchestrator | to this access issue: 2026-02-02 05:03:10.218285 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-02 05:03:10.218293 | orchestrator | directory 2026-02-02 05:03:10.218301 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-02 05:03:10.218309 | orchestrator | 2026-02-02 05:03:10.218317 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-02 05:03:10.218344 | orchestrator | Monday 02 February 2026 05:03:01 +0000 (0:00:00.953) 0:00:28.160 ******* 2026-02-02 05:03:10.218381 | orchestrator | ok: [testbed-manager] 2026-02-02 05:03:10.218393 | orchestrator | 
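The four "Find custom fluentd … config files" tasks above each log a `[WARNING]: Skipped … is not a directory` message because the overlay paths under `/opt/configuration/environments/kolla/files/overlays/fluentd` do not exist as directories on the deploy host; the find module skips the path and the task still reports `ok`. A loose sketch of that skip-rather-than-fail behaviour (`find_config_files` is a hypothetical helper, not Ansible's actual `find` implementation):

```python
import os
import tempfile


def find_config_files(path):
    """Return (matches, warning). A path that is missing or not a
    directory is skipped with a warning instead of raising, loosely
    mirroring the behaviour seen in the task output above."""
    if not os.path.isdir(path):
        warning = (
            f"Skipped '{path}' path due to this access issue: "
            f"'{path}' is not a directory"
        )
        return [], warning
    return sorted(os.listdir(path)), None


# A missing overlay path is skipped, not treated as a task failure.
files, warn = find_config_files("/nonexistent/overlays/fluentd/input")
print(files)  # []
print(warn)

# An existing directory is listed normally.
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "00-input.conf"), "w").close()
    found, warn2 = find_config_files(d)
    print(found)  # ['00-input.conf']
    print(warn2)  # None
```

The net effect in this run: no custom fluentd input/filter/format/output overlays are merged, and the role falls back to the stock `fluentd.conf` template copied in the following task.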
ok: [testbed-node-0] 2026-02-02 05:03:10.218407 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:03:10.218422 | orchestrator | ok: [testbed-node-3] 2026-02-02 05:03:10.218436 | orchestrator | ok: [testbed-node-4] 2026-02-02 05:03:10.218449 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:03:10.218464 | orchestrator | ok: [testbed-node-5] 2026-02-02 05:03:10.218478 | orchestrator | 2026-02-02 05:03:10.218493 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-02 05:03:10.218504 | orchestrator | Monday 02 February 2026 05:03:04 +0000 (0:00:02.974) 0:00:31.134 ******* 2026-02-02 05:03:10.218514 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-02 05:03:10.218525 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-02 05:03:10.218535 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-02 05:03:10.218544 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-02 05:03:10.218553 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-02 05:03:10.218568 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-02 05:03:10.218585 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-02 05:03:10.218595 | orchestrator | 2026-02-02 05:03:10.218604 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-02 05:03:10.218613 | orchestrator | Monday 02 February 2026 05:03:06 +0000 (0:00:02.368) 0:00:33.502 ******* 2026-02-02 05:03:10.218622 | orchestrator | ok: [testbed-manager] 2026-02-02 05:03:10.218632 | orchestrator | ok: 
[testbed-node-0] 2026-02-02 05:03:10.218641 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:03:10.218651 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:03:10.218660 | orchestrator | ok: [testbed-node-3] 2026-02-02 05:03:10.218669 | orchestrator | ok: [testbed-node-4] 2026-02-02 05:03:10.218678 | orchestrator | ok: [testbed-node-5] 2026-02-02 05:03:10.218688 | orchestrator | 2026-02-02 05:03:10.218698 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-02 05:03:10.218708 | orchestrator | Monday 02 February 2026 05:03:08 +0000 (0:00:01.852) 0:00:35.355 ******* 2026-02-02 05:03:10.218733 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:03:10.218747 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:03:10.218755 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:03:10.218763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:03:10.218773 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:03:10.218782 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:03:10.218799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:03:10.218808 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:03:10.218823 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:03:17.793708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:03:17.793853 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:03:17.793872 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:03:17.793909 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:03:17.793935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:03:17.793949 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:03:17.793963 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:03:17.793996 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:03:17.794008 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:03:17.794088 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:03:17.794101 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:03:17.794122 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:03:17.794134 | orchestrator | 2026-02-02 05:03:17.794147 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-02-02 05:03:17.794159 | orchestrator | Monday 02 February 2026 05:03:10 +0000 (0:00:01.976) 0:00:37.331 ******* 2026-02-02 05:03:17.794170 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-02 05:03:17.794187 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-02 05:03:17.794198 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-02 05:03:17.794208 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-02 05:03:17.794219 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-02 05:03:17.794230 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-02 05:03:17.794243 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-02 05:03:17.794257 | orchestrator | 2026-02-02 05:03:17.794269 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] 
********************** 2026-02-02 05:03:17.794282 | orchestrator | Monday 02 February 2026 05:03:13 +0000 (0:00:02.660) 0:00:39.991 ******* 2026-02-02 05:03:17.794295 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-02 05:03:17.794308 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-02 05:03:17.794321 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-02 05:03:17.794333 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-02 05:03:17.794373 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-02 05:03:17.794388 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-02 05:03:17.794401 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-02 05:03:17.794414 | orchestrator | 2026-02-02 05:03:17.794424 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-02-02 05:03:17.794435 | orchestrator | Monday 02 February 2026 05:03:15 +0000 (0:00:02.283) 0:00:42.275 ******* 2026-02-02 05:03:17.794457 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:03:18.812709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:03:18.812836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:03:18.812853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:03:18.812880 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:03:18.812893 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:03:18.812905 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-02 05:03:18.812916 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 
05:03:18.812947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:03:18.812966 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:03:18.812977 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-02-02 05:03:18.812993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:03:18.813005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:03:18.813016 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-02-02 05:03:18.813028 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:03:18.813055 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:03:20.626755 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:03:20.626858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:03:20.626875 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:03:20.626903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:03:20.626915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:03:20.626926 | orchestrator | 2026-02-02 05:03:20.626940 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-02-02 05:03:20.626952 | orchestrator | Monday 02 February 2026 05:03:18 +0000 (0:00:03.491) 0:00:45.767 ******* 2026-02-02 05:03:20.626964 | orchestrator | changed: [testbed-manager] => { 2026-02-02 05:03:20.626976 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:03:20.626987 | orchestrator | } 2026-02-02 05:03:20.626998 | orchestrator | changed: [testbed-node-0] => { 2026-02-02 05:03:20.627009 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:03:20.627020 | 
orchestrator | } 2026-02-02 05:03:20.627030 | orchestrator | changed: [testbed-node-1] => { 2026-02-02 05:03:20.627041 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:03:20.627051 | orchestrator | } 2026-02-02 05:03:20.627062 | orchestrator | changed: [testbed-node-2] => { 2026-02-02 05:03:20.627072 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:03:20.627083 | orchestrator | } 2026-02-02 05:03:20.627094 | orchestrator | changed: [testbed-node-3] => { 2026-02-02 05:03:20.627104 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:03:20.627115 | orchestrator | } 2026-02-02 05:03:20.627126 | orchestrator | changed: [testbed-node-4] => { 2026-02-02 05:03:20.627136 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:03:20.627170 | orchestrator | } 2026-02-02 05:03:20.627181 | orchestrator | changed: [testbed-node-5] => { 2026-02-02 05:03:20.627192 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:03:20.627203 | orchestrator | } 2026-02-02 05:03:20.627213 | orchestrator | 2026-02-02 05:03:20.627225 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-02 05:03:20.627235 | orchestrator | Monday 02 February 2026 05:03:19 +0000 (0:00:01.189) 0:00:46.957 ******* 2026-02-02 05:03:20.627251 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 05:03:20.627311 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:03:20.627327 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:03:20.627341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 05:03:20.627386 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:03:20.627400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:03:20.627415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:03:20.627427 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:03:20.627441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 05:03:20.627463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:03:20.627476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:03:20.627512 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:03:23.380117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 05:03:23.380210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:03:23.380223 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:03:23.380234 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:03:23.380247 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 05:03:23.380255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:03:23.380279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:03:23.380287 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-02 05:03:23.380295 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-02 05:03:23.380310 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:03:23.380331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 05:03:23.380339 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:03:23.380409 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:03:23.380422 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:03:23.380429 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-02 05:03:23.380440 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:03:23.380454 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:03:23.380461 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:03:23.380468 | orchestrator | 2026-02-02 05:03:23.380476 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-02 05:03:23.380483 | orchestrator | Monday 02 February 2026 05:03:22 +0000 (0:00:02.430) 0:00:49.388 ******* 2026-02-02 05:03:23.380489 | orchestrator | 2026-02-02 05:03:23.380496 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-02 05:03:23.380503 | orchestrator | Monday 02 February 2026 05:03:22 +0000 (0:00:00.095) 0:00:49.484 ******* 2026-02-02 05:03:23.380509 | orchestrator | 2026-02-02 05:03:23.380516 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-02 05:03:23.380523 | orchestrator | Monday 02 February 2026 05:03:22 +0000 (0:00:00.081) 0:00:49.565 ******* 2026-02-02 05:03:23.380529 | orchestrator | 2026-02-02 05:03:23.380536 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-02 05:03:23.380542 | orchestrator | Monday 02 February 2026 05:03:22 +0000 (0:00:00.076) 0:00:49.642 ******* 2026-02-02 05:03:23.380549 | orchestrator | 2026-02-02 05:03:23.380556 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-02 05:03:23.380562 | orchestrator | Monday 02 February 2026 05:03:22 +0000 (0:00:00.073) 0:00:49.716 ******* 2026-02-02 05:03:23.380569 | orchestrator | 2026-02-02 05:03:23.380575 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-02 05:03:23.380582 | orchestrator | Monday 02 February 2026 05:03:23 +0000 (0:00:00.375) 0:00:50.091 ******* 2026-02-02 05:03:23.380589 | orchestrator | 2026-02-02 05:03:23.380595 | orchestrator | 
TASK [common : Flush handlers] ************************************************* 2026-02-02 05:03:23.380602 | orchestrator | Monday 02 February 2026 05:03:23 +0000 (0:00:00.098) 0:00:50.190 ******* 2026-02-02 05:03:23.380609 | orchestrator | 2026-02-02 05:03:23.380615 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-02-02 05:03:23.380628 | orchestrator | Monday 02 February 2026 05:03:23 +0000 (0:00:00.116) 0:00:50.306 ******* 2026-02-02 05:04:50.228727 | orchestrator | changed: [testbed-manager] 2026-02-02 05:04:50.228849 | orchestrator | changed: [testbed-node-4] 2026-02-02 05:04:50.228865 | orchestrator | changed: [testbed-node-5] 2026-02-02 05:04:50.228877 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:04:50.228888 | orchestrator | changed: [testbed-node-3] 2026-02-02 05:04:50.228899 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:04:50.228910 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:04:50.228922 | orchestrator | 2026-02-02 05:04:50.228935 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-02-02 05:04:50.228948 | orchestrator | Monday 02 February 2026 05:03:59 +0000 (0:00:36.274) 0:01:26.581 ******* 2026-02-02 05:04:50.228959 | orchestrator | changed: [testbed-node-4] 2026-02-02 05:04:50.228970 | orchestrator | changed: [testbed-node-5] 2026-02-02 05:04:50.228981 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:04:50.228992 | orchestrator | changed: [testbed-manager] 2026-02-02 05:04:50.229004 | orchestrator | changed: [testbed-node-3] 2026-02-02 05:04:50.229015 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:04:50.229049 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:04:50.229061 | orchestrator | 2026-02-02 05:04:50.229072 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-02-02 05:04:50.229083 | orchestrator | Monday 02 February 2026 05:04:35 
+0000 (0:00:36.235) 0:02:02.816 ******* 2026-02-02 05:04:50.229094 | orchestrator | ok: [testbed-manager] 2026-02-02 05:04:50.229106 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:04:50.229117 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:04:50.229127 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:04:50.229138 | orchestrator | ok: [testbed-node-3] 2026-02-02 05:04:50.229149 | orchestrator | ok: [testbed-node-4] 2026-02-02 05:04:50.229159 | orchestrator | ok: [testbed-node-5] 2026-02-02 05:04:50.229170 | orchestrator | 2026-02-02 05:04:50.229181 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-02-02 05:04:50.229193 | orchestrator | Monday 02 February 2026 05:04:37 +0000 (0:00:01.964) 0:02:04.781 ******* 2026-02-02 05:04:50.229204 | orchestrator | changed: [testbed-manager] 2026-02-02 05:04:50.229215 | orchestrator | changed: [testbed-node-3] 2026-02-02 05:04:50.229226 | orchestrator | changed: [testbed-node-4] 2026-02-02 05:04:50.229236 | orchestrator | changed: [testbed-node-5] 2026-02-02 05:04:50.229247 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:04:50.229258 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:04:50.229269 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:04:50.229280 | orchestrator | 2026-02-02 05:04:50.229291 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 05:04:50.229318 | orchestrator | testbed-manager : ok=22  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-02 05:04:50.229407 | orchestrator | testbed-node-0 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-02 05:04:50.229422 | orchestrator | testbed-node-1 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-02 05:04:50.229433 | orchestrator | testbed-node-2 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-02 
05:04:50.229444 | orchestrator | testbed-node-3 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-02 05:04:50.229455 | orchestrator | testbed-node-4 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-02 05:04:50.229466 | orchestrator | testbed-node-5 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-02 05:04:50.229477 | orchestrator | 2026-02-02 05:04:50.229488 | orchestrator | 2026-02-02 05:04:50.229499 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 05:04:50.229510 | orchestrator | Monday 02 February 2026 05:04:49 +0000 (0:00:11.760) 0:02:16.542 ******* 2026-02-02 05:04:50.229521 | orchestrator | =============================================================================== 2026-02-02 05:04:50.229532 | orchestrator | common : Restart fluentd container ------------------------------------- 36.27s 2026-02-02 05:04:50.229543 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 36.24s 2026-02-02 05:04:50.229554 | orchestrator | common : Restart cron container ---------------------------------------- 11.76s 2026-02-02 05:04:50.229565 | orchestrator | common : Copying over config.json files for services -------------------- 3.77s 2026-02-02 05:04:50.229576 | orchestrator | service-check-containers : common | Check containers -------------------- 3.49s 2026-02-02 05:04:50.229587 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.35s 2026-02-02 05:04:50.229598 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.97s 2026-02-02 05:04:50.229608 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.66s 2026-02-02 05:04:50.229631 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.43s 2026-02-02 05:04:50.229642 | orchestrator 
| common : Copying over cron logrotate config file ------------------------ 2.37s 2026-02-02 05:04:50.229653 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.28s 2026-02-02 05:04:50.229663 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.28s 2026-02-02 05:04:50.229674 | orchestrator | common : include_tasks -------------------------------------------------- 2.20s 2026-02-02 05:04:50.229685 | orchestrator | common : include_tasks -------------------------------------------------- 2.19s 2026-02-02 05:04:50.229715 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.13s 2026-02-02 05:04:50.229727 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.98s 2026-02-02 05:04:50.229738 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.96s 2026-02-02 05:04:50.229748 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.85s 2026-02-02 05:04:50.229759 | orchestrator | common : Copying over kolla.target -------------------------------------- 1.82s 2026-02-02 05:04:50.229770 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.67s 2026-02-02 05:04:50.631221 | orchestrator | + osism apply -a upgrade loadbalancer 2026-02-02 05:04:52.825299 | orchestrator | 2026-02-02 05:04:52 | INFO  | Task 87cb3479-4f32-45a2-922e-01d8f7c70f43 (loadbalancer) was prepared for execution. 2026-02-02 05:04:52.825479 | orchestrator | 2026-02-02 05:04:52 | INFO  | It takes a moment until task 87cb3479-4f32-45a2-922e-01d8f7c70f43 (loadbalancer) has been started and output is visible here. 
2026-02-02 05:05:13.512606 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-02 05:05:13.512713 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-02 05:05:13.512742 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-02 05:05:13.512753 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-02 05:05:13.512777 | orchestrator | 2026-02-02 05:05:13.512790 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 05:05:13.512801 | orchestrator | 2026-02-02 05:05:13.512812 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 05:05:13.512823 | orchestrator | Monday 02 February 2026 05:04:58 +0000 (0:00:01.105) 0:00:01.105 ******* 2026-02-02 05:05:13.512834 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:05:13.512847 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:05:13.512858 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:05:13.512869 | orchestrator | 2026-02-02 05:05:13.512890 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 05:05:13.512901 | orchestrator | Monday 02 February 2026 05:04:59 +0000 (0:00:00.811) 0:00:01.916 ******* 2026-02-02 05:05:13.512912 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-02-02 05:05:13.512923 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-02-02 05:05:13.512935 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-02-02 05:05:13.512946 | orchestrator | 2026-02-02 05:05:13.512957 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-02-02 05:05:13.512968 | orchestrator | 2026-02-02 05:05:13.512979 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 
2026-02-02 05:05:13.512990 | orchestrator | Monday 02 February 2026 05:05:00 +0000 (0:00:00.924) 0:00:02.841 ******* 2026-02-02 05:05:13.513001 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:05:13.513031 | orchestrator | 2026-02-02 05:05:13.513043 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter containers] *** 2026-02-02 05:05:13.513054 | orchestrator | Monday 02 February 2026 05:05:01 +0000 (0:00:01.095) 0:00:03.937 ******* 2026-02-02 05:05:13.513064 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:05:13.513075 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:05:13.513086 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:05:13.513097 | orchestrator | 2026-02-02 05:05:13.513108 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] ********************* 2026-02-02 05:05:13.513119 | orchestrator | Monday 02 February 2026 05:05:02 +0000 (0:00:01.213) 0:00:05.150 ******* 2026-02-02 05:05:13.513133 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:05:13.513145 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:05:13.513158 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:05:13.513172 | orchestrator | 2026-02-02 05:05:13.513185 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-02-02 05:05:13.513198 | orchestrator | Monday 02 February 2026 05:05:03 +0000 (0:00:01.034) 0:00:06.185 ******* 2026-02-02 05:05:13.513211 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:05:13.513224 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:05:13.513236 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:05:13.513249 | orchestrator | 2026-02-02 05:05:13.513261 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-02 05:05:13.513274 | orchestrator | Monday 02 February 2026 05:05:04 +0000 (0:00:00.636) 0:00:06.821 
******* 2026-02-02 05:05:13.513287 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:05:13.513299 | orchestrator | 2026-02-02 05:05:13.513313 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-02-02 05:05:13.513360 | orchestrator | Monday 02 February 2026 05:05:05 +0000 (0:00:01.191) 0:00:08.012 ******* 2026-02-02 05:05:13.513381 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:05:13.513401 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:05:13.513421 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:05:13.513440 | orchestrator | 2026-02-02 05:05:13.513457 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-02-02 05:05:13.513471 | orchestrator | Monday 02 February 2026 05:05:05 +0000 (0:00:00.644) 0:00:08.657 ******* 2026-02-02 05:05:13.513484 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-02 05:05:13.513498 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-02 05:05:13.513510 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-02 05:05:13.513520 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-02 05:05:13.513531 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-02 05:05:13.513542 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-02 05:05:13.513552 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-02 05:05:13.513564 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-02 05:05:13.513575 | orchestrator | ok: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-02 05:05:13.513586 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-02 05:05:13.513597 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-02 05:05:13.513626 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-02 05:05:13.513637 | orchestrator | 2026-02-02 05:05:13.513648 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-02 05:05:13.513659 | orchestrator | Monday 02 February 2026 05:05:08 +0000 (0:00:02.406) 0:00:11.063 ******* 2026-02-02 05:05:13.513678 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-02-02 05:05:13.513690 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-02-02 05:05:13.513700 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-02-02 05:05:13.513711 | orchestrator | 2026-02-02 05:05:13.513722 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-02 05:05:13.513733 | orchestrator | Monday 02 February 2026 05:05:09 +0000 (0:00:00.977) 0:00:12.041 ******* 2026-02-02 05:05:13.513744 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-02-02 05:05:13.513755 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-02-02 05:05:13.513765 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-02-02 05:05:13.513776 | orchestrator | 2026-02-02 05:05:13.513787 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-02 05:05:13.513798 | orchestrator | Monday 02 February 2026 05:05:10 +0000 (0:00:01.184) 0:00:13.226 ******* 2026-02-02 05:05:13.513809 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-02-02 05:05:13.513820 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:05:13.513835 | orchestrator | skipping: [testbed-node-1] 
=> (item=ip_vs)  2026-02-02 05:05:13.513847 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:05:13.513857 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-02-02 05:05:13.513868 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:05:13.513879 | orchestrator | 2026-02-02 05:05:13.513890 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-02-02 05:05:13.513901 | orchestrator | Monday 02 February 2026 05:05:11 +0000 (0:00:01.203) 0:00:14.429 ******* 2026-02-02 05:05:13.513915 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-02 05:05:13.513931 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-02 05:05:13.513943 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 
'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-02 05:05:13.513955 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 05:05:13.513981 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 05:05:19.560202 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 05:05:19.560300 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 05:05:19.560313 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 05:05:19.560382 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 05:05:19.560393 | orchestrator | 2026-02-02 05:05:19.560404 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-02 05:05:19.560413 | orchestrator | Monday 02 February 2026 05:05:13 +0000 (0:00:01.729) 0:00:16.159 ******* 2026-02-02 05:05:19.560421 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:05:19.560430 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:05:19.560438 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:05:19.560446 | orchestrator | 2026-02-02 05:05:19.560454 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-02 05:05:19.560462 | orchestrator | Monday 02 February 2026 05:05:14 +0000 (0:00:00.982) 0:00:17.141 ******* 2026-02-02 05:05:19.560470 | orchestrator | ok: [testbed-node-0] => (item=users) 2026-02-02 05:05:19.560496 | orchestrator | ok: [testbed-node-1] => (item=users) 2026-02-02 05:05:19.560504 | orchestrator | ok: [testbed-node-2] => (item=users) 2026-02-02 05:05:19.560512 | orchestrator | ok: [testbed-node-0] => (item=rules) 2026-02-02 05:05:19.560520 | orchestrator | ok: [testbed-node-2] => (item=rules) 2026-02-02 05:05:19.560527 | orchestrator | ok: [testbed-node-1] => (item=rules) 2026-02-02 05:05:19.560535 | orchestrator | 2026-02-02 05:05:19.560543 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-02 05:05:19.560551 | orchestrator | Monday 02 February 2026 05:05:16 +0000 (0:00:01.756) 0:00:18.897 ******* 2026-02-02 05:05:19.560559 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:05:19.560566 
| orchestrator | ok: [testbed-node-1] 2026-02-02 05:05:19.560574 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:05:19.560583 | orchestrator | 2026-02-02 05:05:19.560591 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-02 05:05:19.560598 | orchestrator | Monday 02 February 2026 05:05:17 +0000 (0:00:01.232) 0:00:20.130 ******* 2026-02-02 05:05:19.560606 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:05:19.560614 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:05:19.560622 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:05:19.560629 | orchestrator | 2026-02-02 05:05:19.560637 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-02 05:05:19.560645 | orchestrator | Monday 02 February 2026 05:05:18 +0000 (0:00:01.309) 0:00:21.439 ******* 2026-02-02 05:05:19.560670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-02 05:05:19.560685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 05:05:19.560694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 05:05:19.560704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__79d76fa706b95d54b775d51da90d8b3545f40c5e', '__omit_place_holder__79d76fa706b95d54b775d51da90d8b3545f40c5e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-02 05:05:19.560719 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:05:19.560728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-02 05:05:19.560739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 05:05:19.560748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 05:05:19.560769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__79d76fa706b95d54b775d51da90d8b3545f40c5e', '__omit_place_holder__79d76fa706b95d54b775d51da90d8b3545f40c5e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-02 05:05:22.748358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-02 05:05:22.748443 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:05:22.748456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 05:05:22.748484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 05:05:22.748493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__79d76fa706b95d54b775d51da90d8b3545f40c5e', '__omit_place_holder__79d76fa706b95d54b775d51da90d8b3545f40c5e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-02 05:05:22.748501 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:05:22.748508 | orchestrator | 2026-02-02 05:05:22.748517 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-02-02 05:05:22.748525 | orchestrator | Monday 02 February 2026 05:05:19 +0000 (0:00:00.766) 0:00:22.205 ******* 2026-02-02 05:05:22.748533 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-02 05:05:22.748555 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-02 05:05:22.748563 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-02 05:05:22.748571 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 05:05:22.748584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 05:05:22.748600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__79d76fa706b95d54b775d51da90d8b3545f40c5e', '__omit_place_holder__79d76fa706b95d54b775d51da90d8b3545f40c5e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-02 05:05:22.748608 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 05:05:22.748615 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 05:05:22.748632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 05:05:28.185841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 05:05:28.185955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__79d76fa706b95d54b775d51da90d8b3545f40c5e', '__omit_place_holder__79d76fa706b95d54b775d51da90d8b3545f40c5e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-02 05:05:28.185971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__79d76fa706b95d54b775d51da90d8b3545f40c5e', '__omit_place_holder__79d76fa706b95d54b775d51da90d8b3545f40c5e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-02 05:05:28.185981 | orchestrator | 2026-02-02 05:05:28.185989 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-02 05:05:28.185995 | orchestrator | Monday 02 February 2026 05:05:22 +0000 (0:00:03.186) 0:00:25.392 ******* 2026-02-02 05:05:28.186000 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-02 05:05:28.186009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-02 05:05:28.186080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 
2026-02-02 05:05:28.186107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 05:05:28.186124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 05:05:28.186133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 05:05:28.186139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 05:05:28.186147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 05:05:28.186155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 05:05:28.186164 | orchestrator | 2026-02-02 05:05:28.186172 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-02 05:05:28.186186 | orchestrator | Monday 02 February 
2026 05:05:26 +0000 (0:00:03.787) 0:00:29.179 ******* 2026-02-02 05:05:28.186194 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-02 05:05:28.186205 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-02 05:05:28.186210 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-02 05:05:28.186219 | orchestrator | 2026-02-02 05:05:28.186229 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-02 05:05:45.284607 | orchestrator | Monday 02 February 2026 05:05:28 +0000 (0:00:01.655) 0:00:30.835 ******* 2026-02-02 05:05:45.284723 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-02 05:05:45.284740 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-02 05:05:45.284752 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-02 05:05:45.284763 | orchestrator | 2026-02-02 05:05:45.284776 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-02 05:05:45.284787 | orchestrator | Monday 02 February 2026 05:05:31 +0000 (0:00:03.461) 0:00:34.297 ******* 2026-02-02 05:05:45.284798 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:05:45.284810 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:05:45.284821 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:05:45.284832 | orchestrator | 2026-02-02 05:05:45.284843 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-02-02 05:05:45.284854 | orchestrator | Monday 02 February 2026 05:05:32 +0000 (0:00:01.174) 0:00:35.472 ******* 2026-02-02 
05:05:45.284865 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-02 05:05:45.284876 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-02 05:05:45.284887 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-02 05:05:45.284898 | orchestrator | 2026-02-02 05:05:45.284909 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-02-02 05:05:45.284920 | orchestrator | Monday 02 February 2026 05:05:34 +0000 (0:00:02.022) 0:00:37.494 ******* 2026-02-02 05:05:45.284931 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-02 05:05:45.284942 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-02 05:05:45.284953 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-02 05:05:45.284964 | orchestrator | 2026-02-02 05:05:45.284975 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-02 05:05:45.284986 | orchestrator | Monday 02 February 2026 05:05:36 +0000 (0:00:01.727) 0:00:39.222 ******* 2026-02-02 05:05:45.284998 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:05:45.285009 | orchestrator | 2026-02-02 05:05:45.285020 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-02-02 05:05:45.285031 | orchestrator | Monday 02 February 2026 05:05:37 +0000 (0:00:01.221) 0:00:40.443 ******* 2026-02-02 05:05:45.285043 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem) 
2026-02-02 05:05:45.285054 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem) 2026-02-02 05:05:45.285065 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem) 2026-02-02 05:05:45.285076 | orchestrator | 2026-02-02 05:05:45.285087 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-02 05:05:45.285097 | orchestrator | Monday 02 February 2026 05:05:39 +0000 (0:00:01.592) 0:00:42.036 ******* 2026-02-02 05:05:45.285108 | orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-02 05:05:45.285119 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-02 05:05:45.285130 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-02 05:05:45.285141 | orchestrator | 2026-02-02 05:05:45.285155 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-02-02 05:05:45.285194 | orchestrator | Monday 02 February 2026 05:05:40 +0000 (0:00:01.572) 0:00:43.608 ******* 2026-02-02 05:05:45.285209 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:05:45.285222 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:05:45.285234 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:05:45.285247 | orchestrator | 2026-02-02 05:05:45.285260 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-02-02 05:05:45.285273 | orchestrator | Monday 02 February 2026 05:05:41 +0000 (0:00:00.312) 0:00:43.920 ******* 2026-02-02 05:05:45.285286 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:05:45.285298 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:05:45.285311 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:05:45.285359 | orchestrator | 2026-02-02 05:05:45.285373 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-02 05:05:45.285387 | orchestrator | Monday 02 February 2026 05:05:42 +0000 
(0:00:00.960) 0:00:44.881 ******* 2026-02-02 05:05:45.285418 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-02 05:05:45.285453 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-02 05:05:45.285466 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-02 05:05:45.285478 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 05:05:45.285489 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 05:05:45.285509 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 05:05:45.285526 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 05:05:45.285546 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 05:05:47.389039 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 05:05:47.389130 | orchestrator | 2026-02-02 05:05:47.389140 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 
2026-02-02 05:05:47.389147 | orchestrator | Monday 02 February 2026 05:05:45 +0000 (0:00:03.047) 0:00:47.928 ******* 2026-02-02 05:05:47.389154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-02 05:05:47.389161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 05:05:47.389183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 05:05:47.389189 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:05:47.389195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-02 05:05:47.389211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 05:05:47.389229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 05:05:47.389234 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:05:47.389239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-02 05:05:47.389244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 05:05:47.389253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 05:05:47.389258 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:05:47.389263 | orchestrator | 2026-02-02 05:05:47.389268 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-02 05:05:47.389273 | orchestrator | Monday 02 February 2026 05:05:45 +0000 (0:00:00.722) 0:00:48.651 ******* 2026-02-02 05:05:47.389278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-02 05:05:47.389286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 05:05:47.389296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 05:05:54.869870 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:05:54.869957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-02 05:05:54.869970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 05:05:54.869996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 05:05:54.870004 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:05:54.870011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-02 05:05:54.870060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 05:05:54.870079 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 05:05:54.870086 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:05:54.870093 | orchestrator | 2026-02-02 05:05:54.870101 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-02 05:05:54.870109 | orchestrator | Monday 02 February 2026 05:05:47 +0000 (0:00:01.388) 0:00:50.039 ******* 2026-02-02 05:05:54.870116 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-02 05:05:54.870136 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-02 05:05:54.870144 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-02 05:05:54.870150 | orchestrator | 2026-02-02 05:05:54.870157 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-02 05:05:54.870164 | orchestrator | Monday 02 February 2026 05:05:48 +0000 (0:00:01.477) 0:00:51.517 ******* 2026-02-02 05:05:54.870170 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-02 05:05:54.870177 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-02 05:05:54.870190 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 
2026-02-02 05:05:54.870197 | orchestrator | 2026-02-02 05:05:54.870204 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-02 05:05:54.870210 | orchestrator | Monday 02 February 2026 05:05:50 +0000 (0:00:01.485) 0:00:53.003 ******* 2026-02-02 05:05:54.870217 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-02 05:05:54.870224 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-02 05:05:54.870231 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-02 05:05:54.870238 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-02 05:05:54.870244 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:05:54.870251 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-02 05:05:54.870258 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:05:54.870265 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-02 05:05:54.870271 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:05:54.870278 | orchestrator | 2026-02-02 05:05:54.870285 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-02-02 05:05:54.870292 | orchestrator | Monday 02 February 2026 05:05:51 +0000 (0:00:01.544) 0:00:54.547 ******* 2026-02-02 05:05:54.870299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-02 05:05:54.870306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-02 05:05:54.870316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-02 05:05:54.870375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': 
False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 05:05:56.561454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 05:05:56.561602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 05:05:56.561631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 05:05:56.561654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 05:05:56.561674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 05:05:56.561695 | orchestrator | 2026-02-02 05:05:56.561718 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-02-02 05:05:56.561740 | orchestrator | Monday 02 February 2026 05:05:54 +0000 (0:00:02.965) 0:00:57.512 ******* 2026-02-02 05:05:56.561761 | orchestrator | changed: [testbed-node-0] => { 2026-02-02 05:05:56.561782 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:05:56.561803 | orchestrator | } 2026-02-02 
05:05:56.561823 | orchestrator | changed: [testbed-node-1] => { 2026-02-02 05:05:56.561839 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:05:56.561857 | orchestrator | } 2026-02-02 05:05:56.561877 | orchestrator | changed: [testbed-node-2] => { 2026-02-02 05:05:56.561928 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:05:56.561946 | orchestrator | } 2026-02-02 05:05:56.561965 | orchestrator | 2026-02-02 05:05:56.561984 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-02 05:05:56.562004 | orchestrator | Monday 02 February 2026 05:05:55 +0000 (0:00:00.388) 0:00:57.901 ******* 2026-02-02 05:05:56.562114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-02 05:05:56.562130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 05:05:56.562145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 05:05:56.562160 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:05:56.562188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-02 05:05:56.562203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 05:05:56.562221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 05:05:56.562245 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:05:56.562259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-02 05:05:56.562284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 05:06:01.502998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 05:06:01.503108 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:06:01.503125 | orchestrator | 2026-02-02 05:06:01.503139 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-02 05:06:01.503151 | orchestrator | Monday 02 February 2026 05:05:56 +0000 (0:00:01.304) 0:00:59.206 ******* 2026-02-02 05:06:01.503162 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:06:01.503173 | orchestrator | 2026-02-02 05:06:01.503184 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-02 05:06:01.503196 | orchestrator | Monday 02 February 2026 05:05:57 +0000 (0:00:01.223) 0:01:00.430 ******* 2026-02-02 05:06:01.503211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:06:01.503226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-02 05:06:01.503277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-02 05:06:01.503291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-02 05:06:01.503397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:06:01.503416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-02 05:06:01.503428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-02 05:06:01.503440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-02 05:06:01.503466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:06:01.503487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-02 05:06:02.231108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-02 05:06:02.231246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-02 05:06:02.231276 | orchestrator | 2026-02-02 05:06:02.231299 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-02-02 05:06:02.231371 | orchestrator | Monday 02 February 2026 05:06:01 +0000 (0:00:03.835) 0:01:04.265 ******* 2026-02-02 05:06:02.231389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:06:02.231443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-02 05:06:02.231456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-02 05:06:02.231489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-02 05:06:02.231502 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:06:02.231515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:06:02.231554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-02 05:06:02.231575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-02 05:06:02.231592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-02 05:06:02.231608 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:06:02.231628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:06:02.231660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-02 05:06:11.835937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-02 05:06:11.836032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-02 05:06:11.836066 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:06:11.836077 | orchestrator | 2026-02-02 05:06:11.836086 | orchestrator | TASK [haproxy-config : 
Configuring firewall for aodh] ************************** 2026-02-02 05:06:11.836095 | orchestrator | Monday 02 February 2026 05:06:02 +0000 (0:00:00.726) 0:01:04.992 ******* 2026-02-02 05:06:11.836104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:06:11.836114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:06:11.836123 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:06:11.836143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:06:11.836150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:06:11.836158 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:06:11.836165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:06:11.836172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:06:11.836180 | orchestrator | 
skipping: [testbed-node-2] 2026-02-02 05:06:11.836187 | orchestrator | 2026-02-02 05:06:11.836195 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-02 05:06:11.836202 | orchestrator | Monday 02 February 2026 05:06:03 +0000 (0:00:01.515) 0:01:06.508 ******* 2026-02-02 05:06:11.836209 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:06:11.836218 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:06:11.836225 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:06:11.836232 | orchestrator | 2026-02-02 05:06:11.836239 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-02 05:06:11.836246 | orchestrator | Monday 02 February 2026 05:06:05 +0000 (0:00:01.234) 0:01:07.742 ******* 2026-02-02 05:06:11.836253 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:06:11.836260 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:06:11.836268 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:06:11.836275 | orchestrator | 2026-02-02 05:06:11.836282 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-02 05:06:11.836289 | orchestrator | Monday 02 February 2026 05:06:07 +0000 (0:00:02.128) 0:01:09.870 ******* 2026-02-02 05:06:11.836296 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:06:11.836304 | orchestrator | 2026-02-02 05:06:11.836311 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-02 05:06:11.836407 | orchestrator | Monday 02 February 2026 05:06:08 +0000 (0:00:00.893) 0:01:10.764 ******* 2026-02-02 05:06:11.836445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:06:11.836457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-02 05:06:11.836473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-02 05:06:11.836484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:06:11.836493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 
'timeout': '30'}}})  2026-02-02 05:06:11.836515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-02 05:06:12.480953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:06:12.481045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-02 05:06:12.481056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-02 05:06:12.481063 | orchestrator | 2026-02-02 05:06:12.481070 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-02 05:06:12.481077 | orchestrator | Monday 02 February 2026 05:06:11 +0000 (0:00:03.721) 0:01:14.485 ******* 2026-02-02 05:06:12.481085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:06:12.481136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-02 05:06:12.481144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-02 05:06:12.481150 | orchestrator | skipping: [testbed-node-0] 
2026-02-02 05:06:12.481162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:06:12.481168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-02 05:06:12.481175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-02 05:06:12.481186 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:06:12.481197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:06:22.706302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-02 05:06:22.706472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-02 05:06:22.706521 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:06:22.706536 | orchestrator | 2026-02-02 05:06:22.706549 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-02 05:06:22.706561 | orchestrator | Monday 02 February 2026 05:06:12 +0000 (0:00:00.642) 0:01:15.128 ******* 2026-02-02 05:06:22.706573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:06:22.706587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-02-02 05:06:22.706600 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:06:22.706611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:06:22.706649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:06:22.706661 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:06:22.706672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:06:22.706684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:06:22.706695 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:06:22.706705 | orchestrator | 2026-02-02 05:06:22.706717 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-02 05:06:22.706728 | orchestrator | Monday 02 February 2026 05:06:13 +0000 (0:00:01.166) 0:01:16.294 ******* 2026-02-02 05:06:22.706739 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:06:22.706750 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:06:22.706761 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:06:22.706772 | orchestrator | 2026-02-02 05:06:22.706784 | orchestrator | TASK [proxysql-config : 
Copying over barbican ProxySQL rules config] *********** 2026-02-02 05:06:22.706798 | orchestrator | Monday 02 February 2026 05:06:14 +0000 (0:00:01.229) 0:01:17.524 ******* 2026-02-02 05:06:22.706811 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:06:22.706823 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:06:22.706837 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:06:22.706850 | orchestrator | 2026-02-02 05:06:22.706891 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-02 05:06:22.706906 | orchestrator | Monday 02 February 2026 05:06:16 +0000 (0:00:02.125) 0:01:19.650 ******* 2026-02-02 05:06:22.706920 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:06:22.706937 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:06:22.706955 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:06:22.706974 | orchestrator | 2026-02-02 05:06:22.706993 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-02 05:06:22.707030 | orchestrator | Monday 02 February 2026 05:06:17 +0000 (0:00:00.349) 0:01:19.999 ******* 2026-02-02 05:06:22.707044 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:06:22.707056 | orchestrator | 2026-02-02 05:06:22.707070 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-02 05:06:22.707082 | orchestrator | Monday 02 February 2026 05:06:18 +0000 (0:00:00.942) 0:01:20.942 ******* 2026-02-02 05:06:22.707098 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 
check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-02 05:06:22.707115 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-02 05:06:22.707140 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-02 05:06:22.707152 | orchestrator | 2026-02-02 05:06:22.707164 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-02 05:06:22.707176 | orchestrator | Monday 02 February 2026 05:06:21 +0000 (0:00:02.739) 0:01:23.681 ******* 2026-02-02 05:06:22.707187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-02 05:06:22.707198 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:06:22.707218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-02 05:06:31.456507 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:06:31.456662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-02 05:06:31.456710 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:06:31.456724 | orchestrator | 2026-02-02 05:06:31.456737 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-02 05:06:31.456749 | orchestrator | Monday 02 February 2026 05:06:22 +0000 (0:00:01.677) 0:01:25.358 ******* 2026-02-02 05:06:31.456762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-02 
05:06:31.456776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-02 05:06:31.456789 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:06:31.456800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-02 05:06:31.456812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-02 05:06:31.456823 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:06:31.456834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-02 05:06:31.456845 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-02 05:06:31.456856 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:06:31.456867 | orchestrator | 2026-02-02 05:06:31.456879 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-02 05:06:31.456890 | orchestrator | Monday 02 February 2026 05:06:24 +0000 (0:00:02.101) 0:01:27.460 ******* 2026-02-02 05:06:31.456900 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:06:31.456911 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:06:31.456922 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:06:31.456933 | orchestrator | 2026-02-02 05:06:31.456944 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-02 05:06:31.456983 | orchestrator | Monday 02 February 2026 05:06:25 +0000 (0:00:00.465) 0:01:27.926 ******* 2026-02-02 05:06:31.456997 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:06:31.457010 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:06:31.457023 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:06:31.457036 | orchestrator | 2026-02-02 05:06:31.457049 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-02 05:06:31.457063 | orchestrator | Monday 02 February 2026 05:06:26 +0000 (0:00:01.397) 0:01:29.323 ******* 2026-02-02 05:06:31.457075 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:06:31.457087 | orchestrator | 2026-02-02 05:06:31.457101 | orchestrator | TASK 
[haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-02 05:06:31.457113 | orchestrator | Monday 02 February 2026 05:06:27 +0000 (0:00:01.036) 0:01:30.360 ******* 2026-02-02 05:06:31.457134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:06:31.457151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 05:06:31.457169 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 05:06:31.457184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 05:06:31.457214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:06:32.153814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 05:06:32.153906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 
 2026-02-02 05:06:32.153919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 05:06:32.153931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:06:32.153960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 05:06:32.153994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 05:06:32.154004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 
5672'], 'timeout': '30'}}})  2026-02-02 05:06:32.154014 | orchestrator | 2026-02-02 05:06:32.154074 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-02-02 05:06:32.154085 | orchestrator | Monday 02 February 2026 05:06:31 +0000 (0:00:03.867) 0:01:34.228 ******* 2026-02-02 05:06:32.154095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:06:32.154106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 05:06:32.154122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 05:06:32.154142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 05:06:33.515971 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:06:33.516098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:06:33.516123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 05:06:33.516137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 05:06:33.516197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 05:06:33.516211 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:06:33.516268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:06:33.516283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 05:06:33.516295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-02 05:06:33.516307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-02 05:06:33.516390 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:06:33.516404 | orchestrator | 2026-02-02 05:06:33.516417 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-02 05:06:33.516429 | orchestrator | Monday 02 February 2026 05:06:32 +0000 (0:00:00.690) 0:01:34.918 ******* 2026-02-02 05:06:33.516499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:06:33.516519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:06:33.516539 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:06:33.516558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:06:33.516577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 
05:06:33.516608 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:06:33.516647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:06:33.516687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:06:42.577184 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:06:42.577288 | orchestrator | 2026-02-02 05:06:42.577303 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-02-02 05:06:42.577367 | orchestrator | Monday 02 February 2026 05:06:33 +0000 (0:00:01.246) 0:01:36.164 ******* 2026-02-02 05:06:42.577379 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:06:42.577389 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:06:42.577399 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:06:42.577409 | orchestrator | 2026-02-02 05:06:42.577419 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-02-02 05:06:42.577429 | orchestrator | Monday 02 February 2026 05:06:34 +0000 (0:00:01.235) 0:01:37.399 ******* 2026-02-02 05:06:42.577439 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:06:42.577462 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:06:42.577481 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:06:42.577491 | orchestrator | 2026-02-02 05:06:42.577501 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-02-02 05:06:42.577511 | orchestrator | Monday 02 February 2026 05:06:36 +0000 (0:00:02.094) 0:01:39.493 ******* 2026-02-02 05:06:42.577520 | 
orchestrator | skipping: [testbed-node-0] 2026-02-02 05:06:42.577530 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:06:42.577540 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:06:42.577549 | orchestrator | 2026-02-02 05:06:42.577559 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-02-02 05:06:42.577592 | orchestrator | Monday 02 February 2026 05:06:37 +0000 (0:00:00.590) 0:01:40.083 ******* 2026-02-02 05:06:42.577602 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:06:42.577611 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:06:42.577621 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:06:42.577631 | orchestrator | 2026-02-02 05:06:42.577640 | orchestrator | TASK [include_role : designate] ************************************************ 2026-02-02 05:06:42.577650 | orchestrator | Monday 02 February 2026 05:06:37 +0000 (0:00:00.330) 0:01:40.414 ******* 2026-02-02 05:06:42.577660 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:06:42.577669 | orchestrator | 2026-02-02 05:06:42.577678 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-02-02 05:06:42.577688 | orchestrator | Monday 02 February 2026 05:06:38 +0000 (0:00:00.842) 0:01:41.257 ******* 2026-02-02 05:06:42.577703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:06:42.577719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-02 05:06:42.577747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 05:06:42.577778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 05:06:42.577790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 05:06:42.577810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-02 05:06:42.577822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 
'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-02 05:06:42.577834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:06:42.577851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-02 05:06:42.577870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-02 05:06:43.507390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-02 05:06:43.507504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-02 05:06:43.507519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-02 05:06:43.507531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-02 05:06:43.507559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 05:06:43.507591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-02 05:06:43.507624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-02 05:06:43.507634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-02 05:06:43.507645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-02 05:06:43.507655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-02 05:06:43.507665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-02 05:06:43.507676 | orchestrator |
2026-02-02 05:06:43.507693 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-02-02 05:06:43.507704 | orchestrator | Monday 02 February 2026 05:06:42 +0000 (0:00:04.285) 0:01:45.542 *******
2026-02-02 05:06:43.507722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 05:06:43.812532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-02 05:06:43.812630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-02 05:06:43.812643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-02 05:06:43.812653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-02 05:06:43.812679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-02 05:06:43.812725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 05:06:43.812738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-02 05:06:43.812747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-02 05:06:43.812757 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:06:43.812768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-02 05:06:43.812778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-02 05:06:43.812787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-02 05:06:43.812803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-02 05:06:43.812818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-02 05:06:54.647273 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:06:54.648454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-02 05:06:54.648527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-02 05:06:54.648543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-02 05:06:54.648579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-02 05:06:54.648597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-02 05:06:54.648641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-02 05:06:54.648662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-02 05:06:54.648678 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:06:54.648697 | orchestrator |
2026-02-02 05:06:54.648716 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-02-02 05:06:54.648734 | orchestrator | Monday 02 February 2026 05:06:43 +0000 (0:00:00.924) 0:01:46.466 *******
2026-02-02 05:06:54.648753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-02-02 05:06:54.648774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-02-02 05:06:54.648794 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:06:54.648811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-02-02 05:06:54.648840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-02-02 05:06:54.648859 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:06:54.648878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-02-02 05:06:54.648897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-02-02 05:06:54.648915 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:06:54.648932 | orchestrator |
2026-02-02 05:06:54.648943 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-02-02 05:06:54.648953 | orchestrator | Monday 02 February 2026 05:06:45 +0000 (0:00:01.310) 0:01:47.776 *******
2026-02-02 05:06:54.648963 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:06:54.648973 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:06:54.648983 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:06:54.648992 | orchestrator |
2026-02-02 05:06:54.649002 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-02-02 05:06:54.649021 | orchestrator | Monday 02 February 2026 05:06:46 +0000 (0:00:01.206) 0:01:48.983 *******
2026-02-02 05:06:54.649036 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:06:54.649052 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:06:54.649068 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:06:54.649082 | orchestrator |
2026-02-02 05:06:54.649093 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-02-02 05:06:54.649102 | orchestrator | Monday 02 February 2026 05:06:48 +0000 (0:00:02.211) 0:01:51.194 *******
2026-02-02 05:06:54.649112 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:06:54.649121 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:06:54.649131 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:06:54.649140 | orchestrator |
2026-02-02 05:06:54.649150 | orchestrator | TASK [include_role : glance] ***************************************************
2026-02-02 05:06:54.649160 | orchestrator | Monday 02 February 2026 05:06:48 +0000 (0:00:00.351) 0:01:51.546 *******
2026-02-02 05:06:54.649170 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-02 05:06:54.649179 | orchestrator |
2026-02-02 05:06:54.649189 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-02-02 05:06:54.649198 | orchestrator | Monday 02 February 2026 05:06:50 +0000 (0:00:01.125) 0:01:52.671 *******
2026-02-02 05:06:54.649224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-02 05:06:54.793805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-02-02 05:06:54.793912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-02 05:06:54.793978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-02-02 05:06:54.793993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-02 05:06:54.794079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-02-02 05:06:58.319053 | orchestrator |
2026-02-02 05:06:58.319980 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2026-02-02 05:06:58.320020 | orchestrator | Monday 02 February 2026 05:06:54 +0000 (0:00:04.777) 0:01:57.448 *******
2026-02-02 05:06:58.320056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-02 05:06:58.320074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-02 05:06:58.320107 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:06:58.320161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-02 05:06:58.320190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-02 05:06:58.320224 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:06:58.320258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5', '']}}}})  2026-02-02 05:07:10.598682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 
'tls_backend': 'yes'}}}})  2026-02-02 05:07:10.598825 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:07:10.598845 | orchestrator | 2026-02-02 05:07:10.598858 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-02-02 05:07:10.598870 | orchestrator | Monday 02 February 2026 05:06:58 +0000 (0:00:03.611) 0:02:01.060 ******* 2026-02-02 05:07:10.598883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-02 05:07:10.598914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-02 05:07:10.598927 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:07:10.598938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-02 05:07:10.598969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-02 05:07:10.598990 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:07:10.599001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-02 05:07:10.599013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-02 05:07:10.599024 | orchestrator | 
skipping: [testbed-node-2] 2026-02-02 05:07:10.599035 | orchestrator | 2026-02-02 05:07:10.599047 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-02 05:07:10.599058 | orchestrator | Monday 02 February 2026 05:07:02 +0000 (0:00:03.897) 0:02:04.958 ******* 2026-02-02 05:07:10.599069 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:07:10.599081 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:07:10.599092 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:07:10.599103 | orchestrator | 2026-02-02 05:07:10.599113 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-02 05:07:10.599125 | orchestrator | Monday 02 February 2026 05:07:03 +0000 (0:00:01.189) 0:02:06.147 ******* 2026-02-02 05:07:10.599142 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:07:10.599160 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:07:10.599179 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:07:10.599197 | orchestrator | 2026-02-02 05:07:10.599213 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-02 05:07:10.599227 | orchestrator | Monday 02 February 2026 05:07:05 +0000 (0:00:02.106) 0:02:08.254 ******* 2026-02-02 05:07:10.599240 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:07:10.599253 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:07:10.599266 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:07:10.599278 | orchestrator | 2026-02-02 05:07:10.599290 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-02 05:07:10.599303 | orchestrator | Monday 02 February 2026 05:07:05 +0000 (0:00:00.326) 0:02:08.580 ******* 2026-02-02 05:07:10.599348 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:07:10.599359 | orchestrator | 2026-02-02 05:07:10.599370 | orchestrator | TASK [haproxy-config : 
Copying over grafana haproxy config] ******************** 2026-02-02 05:07:10.599381 | orchestrator | Monday 02 February 2026 05:07:07 +0000 (0:00:01.155) 0:02:09.736 ******* 2026-02-02 05:07:10.599400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:07:10.599423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:07:21.088280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:07:21.088401 | orchestrator | 2026-02-02 05:07:21.088413 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-02 05:07:21.088421 | orchestrator | Monday 02 February 2026 05:07:10 +0000 (0:00:03.509) 0:02:13.246 ******* 2026-02-02 05:07:21.088428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:07:21.088435 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:07:21.088443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:07:21.088450 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:07:21.088469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:07:21.088491 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:07:21.088498 | orchestrator | 2026-02-02 05:07:21.088505 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-02 05:07:21.088511 | orchestrator | Monday 02 February 2026 05:07:11 +0000 (0:00:00.699) 0:02:13.945 ******* 2026-02-02 05:07:21.088519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:07:21.088528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:07:21.088536 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:07:21.088559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:07:21.088566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:07:21.088573 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:07:21.088579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:07:21.088586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:07:21.088592 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:07:21.088599 | orchestrator | 2026-02-02 05:07:21.088605 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-02 05:07:21.088612 | orchestrator | Monday 02 February 2026 05:07:11 +0000 
(0:00:00.704) 0:02:14.650 ******* 2026-02-02 05:07:21.088618 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:07:21.088625 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:07:21.088632 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:07:21.088687 | orchestrator | 2026-02-02 05:07:21.088694 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-02 05:07:21.088700 | orchestrator | Monday 02 February 2026 05:07:13 +0000 (0:00:01.198) 0:02:15.848 ******* 2026-02-02 05:07:21.088706 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:07:21.088712 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:07:21.088719 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:07:21.088725 | orchestrator | 2026-02-02 05:07:21.088731 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-02 05:07:21.088737 | orchestrator | Monday 02 February 2026 05:07:15 +0000 (0:00:02.542) 0:02:18.391 ******* 2026-02-02 05:07:21.088743 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:07:21.088750 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:07:21.088756 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:07:21.088763 | orchestrator | 2026-02-02 05:07:21.088769 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-02 05:07:21.088775 | orchestrator | Monday 02 February 2026 05:07:16 +0000 (0:00:00.340) 0:02:18.732 ******* 2026-02-02 05:07:21.088781 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:07:21.088788 | orchestrator | 2026-02-02 05:07:21.088801 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-02 05:07:21.088807 | orchestrator | Monday 02 February 2026 05:07:17 +0000 (0:00:00.975) 0:02:19.707 ******* 2026-02-02 05:07:21.088828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 
'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-02 05:07:21.758286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-02 05:07:21.758492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back 
if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-02 05:07:21.758514 | orchestrator | 2026-02-02 05:07:21.758527 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-02 05:07:21.758539 | orchestrator | Monday 02 February 2026 05:07:21 +0000 (0:00:04.023) 0:02:23.731 ******* 2026-02-02 05:07:21.758557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-02 05:07:21.758576 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:07:21.758597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-02 05:07:27.134742 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:07:27.134851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-02 05:07:27.134899 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:07:27.134913 | orchestrator | 2026-02-02 05:07:27.134926 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-02 05:07:27.134938 | orchestrator | Monday 02 February 2026 05:07:21 +0000 (0:00:00.680) 0:02:24.412 ******* 2026-02-02 05:07:27.134951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-02 05:07:27.134966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-02 05:07:27.134998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-02 05:07:27.135012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-02 05:07:27.135035 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-02 05:07:27.135048 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:07:27.135077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-02 05:07:27.135177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-02 05:07:27.135209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-02 05:07:27.135232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-02 05:07:27.135255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-02 05:07:27.135276 | orchestrator | skipping: 
[testbed-node-1] 2026-02-02 05:07:27.135299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-02 05:07:27.135345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-02 05:07:27.135359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-02 05:07:27.135373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-02 05:07:27.135386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-02 05:07:27.135398 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:07:27.135412 | orchestrator | 2026-02-02 05:07:27.135424 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-02 05:07:27.135438 | 
orchestrator | Monday 02 February 2026 05:07:22 +0000 (0:00:01.035) 0:02:25.448 ******* 2026-02-02 05:07:27.135451 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:07:27.135464 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:07:27.135477 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:07:27.135490 | orchestrator | 2026-02-02 05:07:27.135503 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-02 05:07:27.135517 | orchestrator | Monday 02 February 2026 05:07:24 +0000 (0:00:01.592) 0:02:27.040 ******* 2026-02-02 05:07:27.135538 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:07:27.135551 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:07:27.135564 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:07:27.135576 | orchestrator | 2026-02-02 05:07:27.135589 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-02 05:07:27.135601 | orchestrator | Monday 02 February 2026 05:07:26 +0000 (0:00:02.213) 0:02:29.253 ******* 2026-02-02 05:07:27.135614 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:07:27.135633 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:07:27.135650 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:07:27.135669 | orchestrator | 2026-02-02 05:07:27.135686 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-02 05:07:27.135703 | orchestrator | Monday 02 February 2026 05:07:26 +0000 (0:00:00.340) 0:02:29.593 ******* 2026-02-02 05:07:27.135737 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:07:34.146485 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:07:34.146617 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:07:34.146646 | orchestrator | 2026-02-02 05:07:34.146671 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-02 05:07:34.146694 | orchestrator | Monday 02 February 2026 
05:07:27 +0000 (0:00:00.314) 0:02:29.908 ******* 2026-02-02 05:07:34.146716 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:07:34.146736 | orchestrator | 2026-02-02 05:07:34.146757 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-02 05:07:34.146777 | orchestrator | Monday 02 February 2026 05:07:28 +0000 (0:00:01.300) 0:02:31.209 ******* 2026-02-02 05:07:34.146804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-02 05:07:34.146852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-02 05:07:34.146875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-02 05:07:34.146924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-02 05:07:34.146971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-02 05:07:34.146993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-02 05:07:34.147023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-02 05:07:34.147047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-02 05:07:34.147081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-02 05:07:34.147100 | orchestrator | 2026-02-02 05:07:34.147121 | orchestrator | TASK [haproxy-config : Add configuration for 
keystone when using single external frontend] *** 2026-02-02 05:07:34.147143 | orchestrator | Monday 02 February 2026 05:07:32 +0000 (0:00:04.223) 0:02:35.432 ******* 2026-02-02 05:07:34.147177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-02 05:07:35.036409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-02 05:07:35.036527 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-02 05:07:35.036543 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:07:35.036558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-02 05:07:35.036589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-02 05:07:35.036600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-02 05:07:35.036610 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:07:35.036639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 
'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-02 05:07:35.036656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-02 05:07:35.036666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-02 05:07:35.036688 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:07:35.036706 | orchestrator | 2026-02-02 05:07:35.036725 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-02-02 05:07:35.036743 | orchestrator 
| Monday 02 February 2026 05:07:34 +0000 (0:00:01.365) 0:02:36.797 ******* 2026-02-02 05:07:35.036762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-02 05:07:35.036784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-02 05:07:35.036804 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:07:35.036817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-02 05:07:35.036827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-02 05:07:35.036837 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:07:35.036847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-02 05:07:35.036857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-02 05:07:35.036866 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:07:35.036876 | orchestrator | 2026-02-02 05:07:35.036886 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-02 05:07:35.036904 | orchestrator | Monday 02 February 2026 05:07:35 +0000 (0:00:00.882) 0:02:37.680 ******* 2026-02-02 05:07:45.035367 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:07:45.035478 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:07:45.035492 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:07:45.035503 | orchestrator | 2026-02-02 05:07:45.035515 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-02 05:07:45.035527 | orchestrator | Monday 02 February 2026 05:07:36 +0000 (0:00:01.316) 0:02:38.996 ******* 2026-02-02 05:07:45.035537 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:07:45.035547 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:07:45.035557 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:07:45.035566 | orchestrator | 2026-02-02 05:07:45.035576 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-02 05:07:45.035586 | orchestrator | Monday 02 February 2026 05:07:38 +0000 (0:00:02.239) 0:02:41.236 ******* 2026-02-02 05:07:45.035596 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:07:45.035606 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:07:45.035615 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:07:45.035646 | orchestrator | 2026-02-02 05:07:45.035657 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-02 05:07:45.035667 | orchestrator | Monday 02 February 2026 05:07:39 +0000 (0:00:00.633) 0:02:41.870 ******* 2026-02-02 05:07:45.035676 | 
orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:07:45.035686 | orchestrator | 2026-02-02 05:07:45.035708 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-02-02 05:07:45.035718 | orchestrator | Monday 02 February 2026 05:07:40 +0000 (0:00:01.046) 0:02:42.917 ******* 2026-02-02 05:07:45.035732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:07:45.035749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 05:07:45.035761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:07:45.035790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 05:07:45.035816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:07:45.035827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  
2026-02-02 05:07:45.035837 | orchestrator | 2026-02-02 05:07:45.035847 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-02 05:07:45.035857 | orchestrator | Monday 02 February 2026 05:07:44 +0000 (0:00:03.939) 0:02:46.856 ******* 2026-02-02 05:07:45.035868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:07:45.035885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 05:07:54.744709 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:07:54.744829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:07:54.744848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 05:07:54.744860 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:07:54.744872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:07:54.744882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-02 05:07:54.744891 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:07:54.744901 | orchestrator | 2026-02-02 05:07:54.744930 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-02-02 05:07:54.744941 | orchestrator | Monday 02 February 2026 05:07:45 +0000 (0:00:00.832) 0:02:47.688 ******* 2026-02-02 05:07:54.744968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:07:54.744983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:07:54.744994 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:07:54.745005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:07:54.745021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:07:54.745032 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:07:54.745041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:07:54.745053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:07:54.745062 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:07:54.745072 | orchestrator | 2026-02-02 05:07:54.745082 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-02-02 05:07:54.745092 | orchestrator | Monday 02 February 2026 05:07:46 +0000 (0:00:00.999) 0:02:48.688 ******* 2026-02-02 05:07:54.745102 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:07:54.745113 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:07:54.745123 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:07:54.745133 | orchestrator | 2026-02-02 05:07:54.745143 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-02-02 05:07:54.745154 | orchestrator | Monday 02 February 2026 05:07:47 +0000 (0:00:01.577) 0:02:50.266 ******* 2026-02-02 05:07:54.745164 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:07:54.745175 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:07:54.745185 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:07:54.745195 | orchestrator | 2026-02-02 05:07:54.745205 | orchestrator | TASK [include_role : manila] *************************************************** 2026-02-02 05:07:54.745215 | orchestrator | Monday 02 February 2026 05:07:49 +0000 (0:00:02.176) 0:02:52.442 ******* 2026-02-02 05:07:54.745225 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:07:54.745235 | orchestrator | 2026-02-02 05:07:54.745245 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-02-02 05:07:54.745255 | orchestrator | Monday 02 February 2026 05:07:50 +0000 (0:00:01.129) 0:02:53.571 ******* 2026-02-02 05:07:54.745266 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:07:54.745288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 05:07:54.745330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 05:07:55.491806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-02 05:07:55.491904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 
05:07:55.491915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 05:07:55.491936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 05:07:55.491943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
manila-data 5672'], 'timeout': '30'}}})  2026-02-02 05:07:55.491965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:07:55.491972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 05:07:55.491978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 05:07:55.491984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-02 05:07:55.491994 | orchestrator | 2026-02-02 05:07:55.492001 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-02 05:07:55.492007 | orchestrator | Monday 02 February 2026 05:07:54 +0000 (0:00:03.937) 0:02:57.509 ******* 2026-02-02 05:07:55.492015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:07:55.492025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 05:07:56.579469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 05:07:56.579576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-02 05:07:56.579600 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:07:56.579624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:07:56.579672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 05:07:56.579687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 05:07:56.579719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-02 05:07:56.579738 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:07:56.579749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:07:56.579761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 05:07:56.579779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-02 05:07:56.579791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-02 05:07:56.579803 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:07:56.579814 | orchestrator | 2026-02-02 05:07:56.579826 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-02-02 05:07:56.579838 | orchestrator | Monday 02 February 2026 05:07:55 +0000 (0:00:00.747) 0:02:58.256 ******* 2026-02-02 05:07:56.579850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:07:56.579865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:07:56.579878 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:07:56.579889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option 
httpchk']}})  2026-02-02 05:07:56.579912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:08:08.053373 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:08:08.053492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:08:08.053512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:08:08.053526 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:08:08.053538 | orchestrator | 2026-02-02 05:08:08.053550 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-02-02 05:08:08.053562 | orchestrator | Monday 02 February 2026 05:07:56 +0000 (0:00:00.969) 0:02:59.226 ******* 2026-02-02 05:08:08.053573 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:08:08.053584 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:08:08.053617 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:08:08.053628 | orchestrator | 2026-02-02 05:08:08.053639 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-02-02 05:08:08.053650 | orchestrator | Monday 02 February 2026 05:07:58 +0000 (0:00:01.650) 0:03:00.877 ******* 2026-02-02 05:08:08.053661 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:08:08.053672 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:08:08.053682 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:08:08.053693 | 
orchestrator | 2026-02-02 05:08:08.053704 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-02-02 05:08:08.053715 | orchestrator | Monday 02 February 2026 05:08:00 +0000 (0:00:02.209) 0:03:03.086 ******* 2026-02-02 05:08:08.053726 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:08:08.053736 | orchestrator | 2026-02-02 05:08:08.053747 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-02-02 05:08:08.053758 | orchestrator | Monday 02 February 2026 05:08:01 +0000 (0:00:01.507) 0:03:04.594 ******* 2026-02-02 05:08:08.053769 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-02-02 05:08:08.053780 | orchestrator | 2026-02-02 05:08:08.053791 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-02-02 05:08:08.053801 | orchestrator | Monday 02 February 2026 05:08:05 +0000 (0:00:03.590) 0:03:08.185 ******* 2026-02-02 05:08:08.053817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 05:08:08.053864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-02 05:08:08.053882 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:08:08.053906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 05:08:08.053921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-02 05:08:08.053935 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:08:08.053959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 05:08:10.693664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-02 05:08:10.693758 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:08:10.693772 | orchestrator | 2026-02-02 05:08:10.693782 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-02 05:08:10.693792 | orchestrator | Monday 02 February 2026 05:08:08 +0000 (0:00:02.509) 0:03:10.694 ******* 2026-02-02 05:08:10.693843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 05:08:10.693856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-02 05:08:10.693865 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:08:10.693897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 05:08:10.693929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-02 05:08:10.693938 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:08:10.693948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 05:08:10.693974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-02 05:08:20.750593 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:08:20.750689 | orchestrator | 2026-02-02 05:08:20.750700 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-02 05:08:20.750708 | orchestrator | Monday 02 February 2026 05:08:10 +0000 (0:00:02.642) 0:03:13.336 ******* 2026-02-02 05:08:20.750717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}})  2026-02-02 05:08:20.750728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-02 05:08:20.750735 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:08:20.750742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-02 05:08:20.750749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  
2026-02-02 05:08:20.750756 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:08:20.750762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-02 05:08:20.750799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-02 05:08:20.750807 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:08:20.750813 | orchestrator | 2026-02-02 05:08:20.750819 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-02 05:08:20.750826 | orchestrator | Monday 02 February 2026 05:08:13 +0000 (0:00:03.283) 0:03:16.620 ******* 2026-02-02 05:08:20.750832 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:08:20.750852 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:08:20.750859 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:08:20.750865 | orchestrator | 2026-02-02 05:08:20.750871 | orchestrator | TASK [proxysql-config : Copying over 
mariadb ProxySQL rules config] ************ 2026-02-02 05:08:20.750877 | orchestrator | Monday 02 February 2026 05:08:15 +0000 (0:00:01.758) 0:03:18.378 ******* 2026-02-02 05:08:20.750883 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:08:20.750890 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:08:20.750896 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:08:20.750902 | orchestrator | 2026-02-02 05:08:20.750908 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-02 05:08:20.750914 | orchestrator | Monday 02 February 2026 05:08:17 +0000 (0:00:01.586) 0:03:19.964 ******* 2026-02-02 05:08:20.750920 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:08:20.750927 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:08:20.750933 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:08:20.750939 | orchestrator | 2026-02-02 05:08:20.750945 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-02 05:08:20.750951 | orchestrator | Monday 02 February 2026 05:08:17 +0000 (0:00:00.364) 0:03:20.329 ******* 2026-02-02 05:08:20.750957 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:08:20.750963 | orchestrator | 2026-02-02 05:08:20.750970 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-02 05:08:20.750976 | orchestrator | Monday 02 February 2026 05:08:19 +0000 (0:00:01.456) 0:03:21.786 ******* 2026-02-02 05:08:20.750983 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-02 05:08:20.750991 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-02 05:08:20.751003 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-02 05:08:20.751010 | orchestrator 
| 2026-02-02 05:08:20.751016 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-02 05:08:20.751024 | orchestrator | Monday 02 February 2026 05:08:20 +0000 (0:00:01.511) 0:03:23.297 ******* 2026-02-02 05:08:20.751038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-02 05:08:30.551275 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:08:30.551500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-02 05:08:30.551523 | orchestrator | skipping: 
[testbed-node-1] 2026-02-02 05:08:30.551536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-02 05:08:30.551570 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:08:30.551582 | orchestrator | 2026-02-02 05:08:30.551594 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-02 05:08:30.551606 | orchestrator | Monday 02 February 2026 05:08:21 +0000 (0:00:00.419) 0:03:23.717 ******* 2026-02-02 05:08:30.551618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-02 05:08:30.551631 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:08:30.551642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-02 05:08:30.551653 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:08:30.551664 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-02 05:08:30.551675 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:08:30.551686 | orchestrator | 2026-02-02 05:08:30.551697 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-02 05:08:30.551707 | orchestrator | Monday 02 February 2026 05:08:22 +0000 (0:00:01.015) 0:03:24.732 ******* 2026-02-02 05:08:30.551718 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:08:30.551729 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:08:30.551739 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:08:30.551750 | orchestrator | 2026-02-02 05:08:30.551761 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-02 05:08:30.551771 | orchestrator | Monday 02 February 2026 05:08:22 +0000 (0:00:00.461) 0:03:25.194 ******* 2026-02-02 05:08:30.551782 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:08:30.551793 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:08:30.551804 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:08:30.551816 | orchestrator | 2026-02-02 05:08:30.551844 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-02 05:08:30.551857 | orchestrator | Monday 02 February 2026 05:08:24 +0000 (0:00:01.782) 0:03:26.977 ******* 2026-02-02 05:08:30.551871 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:08:30.551885 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:08:30.551898 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:08:30.551910 | orchestrator | 2026-02-02 05:08:30.551923 | orchestrator | TASK [include_role : neutron] 
************************************************** 2026-02-02 05:08:30.551936 | orchestrator | Monday 02 February 2026 05:08:24 +0000 (0:00:00.663) 0:03:27.641 ******* 2026-02-02 05:08:30.551949 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:08:30.551962 | orchestrator | 2026-02-02 05:08:30.551975 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-02-02 05:08:30.551988 | orchestrator | Monday 02 February 2026 05:08:26 +0000 (0:00:01.285) 0:03:28.926 ******* 2026-02-02 05:08:30.552027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:08:30.552053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-02 05:08:30.552069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-02 05:08:30.552090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 
'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-02 05:08:30.552116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-02 05:08:30.984701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 05:08:30.984843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 05:08:30.984861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-02 05:08:30.984873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 05:08:30.984896 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-02 05:08:30.984906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-02 05:08:30.984934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 05:08:30.984949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 
'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-02 05:08:30.984961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-02 05:08:30.984972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-02 05:08:30.984985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:08:30.985002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-02 05:08:31.090849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-02 05:08:31.090962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-02 05:08:31.090998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:08:31.091014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-02 05:08:31.091071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 05:08:31.091086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-02 05:08:31.091099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-02 05:08:31.091112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 05:08:31.091130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-02 05:08:31.091157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-02 05:08:31.276716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-02 05:08:31.276815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 05:08:31.276854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 05:08:31.276882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-02 05:08:31.276894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 
'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-02 05:08:31.276932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 05:08:31.276962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-02 05:08:31.276973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-02 05:08:31.276998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 05:08:31.277007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-02 
05:08:31.277021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-02 05:08:31.277038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 05:08:31.277054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-02 05:08:32.510560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-02 05:08:32.510688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 05:08:32.510720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-02 05:08:32.510761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-02 05:08:32.510799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-02 05:08:32.510812 | orchestrator | 
2026-02-02 05:08:32.510825 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-02 05:08:32.510837 | orchestrator | Monday 02 February 2026 05:08:31 +0000 (0:00:05.110) 0:03:34.037 ******* 2026-02-02 05:08:32.510870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:08:32.510884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-02 05:08:32.510903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-02 05:08:32.510924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-02 05:08:32.510945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-02 05:08:32.627643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 05:08:32.627743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 05:08:32.627759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-02 05:08:32.627809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 05:08:32.627848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-02 05:08:32.627882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:08:32.627897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-02 05:08:32.627910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-02 05:08:32.627935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 05:08:32.627948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-02 05:08:32.627960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-02 05:08:32.627980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-02 05:08:32.732529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-02 05:08:32.732673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-02 05:08:32.732693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-02 05:08:32.732706 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:08:32.732721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 05:08:32.732735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 05:08:32.732768 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:08:32.732783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-02 05:08:32.732804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 
'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-02 05:08:32.732817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-02 05:08:32.732830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 05:08:32.732883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-02 05:08:32.929340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-02 05:08:32.929432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-02 05:08:32.929443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 05:08:32.929451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-02 05:08:32.929460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 05:08:32.929467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 05:08:32.929486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  
2026-02-02 05:08:32.929511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-02 05:08:32.929519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-02 05:08:32.929527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-02 05:08:32.929535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-02 05:08:32.929542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-02 05:08:32.929560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-02 05:08:43.466182 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:08:43.466272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-02 05:08:43.466285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-02 05:08:43.466296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-02 05:08:43.466338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-02 05:08:43.466363 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:08:43.466370 | orchestrator | 2026-02-02 05:08:43.466377 | orchestrator | TASK [haproxy-config : Configuring firewall 
for neutron] *********************** 2026-02-02 05:08:43.466384 | orchestrator | Monday 02 February 2026 05:08:32 +0000 (0:00:01.546) 0:03:35.583 ******* 2026-02-02 05:08:43.466392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:08:43.466403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:08:43.466416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:08:43.466443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:08:43.466455 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:08:43.466466 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:08:43.466481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:08:43.466492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:08:43.466503 | orchestrator 
| skipping: [testbed-node-1] 2026-02-02 05:08:43.466515 | orchestrator | 2026-02-02 05:08:43.466526 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-02 05:08:43.466537 | orchestrator | Monday 02 February 2026 05:08:34 +0000 (0:00:01.712) 0:03:37.296 ******* 2026-02-02 05:08:43.466544 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:08:43.466551 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:08:43.466557 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:08:43.466563 | orchestrator | 2026-02-02 05:08:43.466569 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-02 05:08:43.466575 | orchestrator | Monday 02 February 2026 05:08:36 +0000 (0:00:01.509) 0:03:38.806 ******* 2026-02-02 05:08:43.466581 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:08:43.466587 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:08:43.466594 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:08:43.466600 | orchestrator | 2026-02-02 05:08:43.466606 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-02 05:08:43.466612 | orchestrator | Monday 02 February 2026 05:08:38 +0000 (0:00:02.146) 0:03:40.952 ******* 2026-02-02 05:08:43.466618 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:08:43.466624 | orchestrator | 2026-02-02 05:08:43.466630 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-02 05:08:43.466637 | orchestrator | Monday 02 February 2026 05:08:39 +0000 (0:00:01.255) 0:03:42.208 ******* 2026-02-02 05:08:43.466644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-02 05:08:43.466658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-02 05:08:43.466675 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-02 05:08:56.256109 | orchestrator | 2026-02-02 05:08:56.256256 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-02 05:08:56.256277 | orchestrator | Monday 02 February 2026 05:08:43 +0000 (0:00:03.905) 0:03:46.113 ******* 2026-02-02 05:08:56.256295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-02 05:08:56.256412 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:08:56.256428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-02 05:08:56.256440 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:08:56.256452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-02 05:08:56.256467 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:08:56.256485 | orchestrator | 2026-02-02 05:08:56.256504 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-02 05:08:56.256523 | orchestrator | Monday 02 February 2026 05:08:44 +0000 (0:00:00.546) 0:03:46.660 ******* 2026-02-02 05:08:56.256588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-02 05:08:56.256640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-02 05:08:56.256745 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:08:56.256769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-02 05:08:56.256789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-02 05:08:56.256807 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:08:56.256824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-02 05:08:56.256861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-02 05:08:56.256880 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:08:56.256899 | orchestrator | 2026-02-02 05:08:56.256918 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-02 05:08:56.256936 | orchestrator | Monday 02 February 2026 05:08:45 +0000 (0:00:01.137) 0:03:47.797 ******* 2026-02-02 05:08:56.256955 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:08:56.256974 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:08:56.256991 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:08:56.257010 | orchestrator | 2026-02-02 05:08:56.257029 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-02 05:08:56.257048 | orchestrator | Monday 02 February 2026 05:08:46 +0000 (0:00:01.236) 0:03:49.033 ******* 2026-02-02 05:08:56.257067 | orchestrator | ok: [testbed-node-0] 2026-02-02 
05:08:56.257086 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:08:56.257104 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:08:56.257120 | orchestrator | 2026-02-02 05:08:56.257132 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-02-02 05:08:56.257143 | orchestrator | Monday 02 February 2026 05:08:48 +0000 (0:00:02.171) 0:03:51.204 ******* 2026-02-02 05:08:56.257153 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:08:56.257164 | orchestrator | 2026-02-02 05:08:56.257175 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-02 05:08:56.257185 | orchestrator | Monday 02 February 2026 05:08:50 +0000 (0:00:01.680) 0:03:52.885 ******* 2026-02-02 05:08:56.257198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:08:56.257234 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:08:56.405661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:08:56.405750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:08:56.405764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 05:08:56.405774 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 05:08:56.405813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:08:56.405841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 05:08:56.405851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 05:08:56.405860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:08:56.405870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 05:08:56.405883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 05:08:56.405899 | orchestrator | 2026-02-02 05:08:56.405910 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-02 05:08:56.405926 | orchestrator | Monday 02 February 2026 05:08:56 +0000 (0:00:06.168) 0:03:59.053 ******* 2026-02-02 05:08:57.096932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:08:57.097036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:08:57.097053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 05:08:57.097067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 05:08:57.097135 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:08:57.097171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:08:57.097185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:08:57.097197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-02 05:08:57.097209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 05:08:57.097220 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:08:57.097237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:08:57.097266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:09:12.935867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  
2026-02-02 05:09:12.935960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-02 05:09:12.935970 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:09:12.935979 | orchestrator | 2026-02-02 05:09:12.935987 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-02 05:09:12.935994 | orchestrator | Monday 02 February 2026 05:08:57 +0000 (0:00:00.822) 0:03:59.875 ******* 2026-02-02 05:09:12.936001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:09:12.936011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:09:12.936019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:09:12.936044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:09:12.936050 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:09:12.936068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:09:12.936074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:09:12.936080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:09:12.936087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:09:12.936093 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:09:12.936099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:09:12.936118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:09:12.936124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:09:12.936131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:09:12.936137 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:09:12.936143 | orchestrator | 2026-02-02 05:09:12.936149 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-02 05:09:12.936156 | orchestrator | Monday 02 February 2026 05:08:58 +0000 (0:00:01.738) 0:04:01.613 ******* 2026-02-02 05:09:12.936162 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:09:12.936168 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:09:12.936174 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:09:12.936180 | orchestrator | 2026-02-02 05:09:12.936186 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-02 05:09:12.936193 | orchestrator | Monday 02 February 2026 05:09:01 +0000 (0:00:02.226) 0:04:03.840 ******* 2026-02-02 05:09:12.936198 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:09:12.936205 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:09:12.936211 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:09:12.936217 | orchestrator | 2026-02-02 05:09:12.936223 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-02-02 05:09:12.936229 | orchestrator | Monday 02 
February 2026 05:09:03 +0000 (0:00:02.217) 0:04:06.058 ******* 2026-02-02 05:09:12.936241 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:09:12.936247 | orchestrator | 2026-02-02 05:09:12.936253 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-02-02 05:09:12.936259 | orchestrator | Monday 02 February 2026 05:09:05 +0000 (0:00:02.014) 0:04:08.072 ******* 2026-02-02 05:09:12.936266 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-02-02 05:09:12.936273 | orchestrator | 2026-02-02 05:09:12.936279 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-02 05:09:12.936285 | orchestrator | Monday 02 February 2026 05:09:06 +0000 (0:00:00.946) 0:04:09.019 ******* 2026-02-02 05:09:12.936293 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-02 05:09:12.936334 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout 
tunnel 1h']}}}}) 2026-02-02 05:09:12.936342 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-02 05:09:12.936348 | orchestrator | 2026-02-02 05:09:12.936355 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-02 05:09:12.936362 | orchestrator | Monday 02 February 2026 05:09:11 +0000 (0:00:04.941) 0:04:13.961 ******* 2026-02-02 05:09:12.936369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-02 05:09:12.936381 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:09:29.539895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': 
['timeout tunnel 1h']}}}})  2026-02-02 05:09:29.540040 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:09:29.540061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-02 05:09:29.540098 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:09:29.540110 | orchestrator | 2026-02-02 05:09:29.540123 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-02-02 05:09:29.540136 | orchestrator | Monday 02 February 2026 05:09:12 +0000 (0:00:01.620) 0:04:15.581 ******* 2026-02-02 05:09:29.540148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-02 05:09:29.540163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-02 05:09:29.540176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-02 05:09:29.540187 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:09:29.540199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-02 05:09:29.540215 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:09:29.540256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-02 05:09:29.540285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-02 05:09:29.540336 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:09:29.540354 | orchestrator | 2026-02-02 05:09:29.540369 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-02 05:09:29.540385 | orchestrator | Monday 02 February 2026 05:09:14 +0000 (0:00:01.936) 0:04:17.518 ******* 2026-02-02 05:09:29.540403 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:09:29.540423 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:09:29.540441 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:09:29.540459 | orchestrator | 2026-02-02 05:09:29.540479 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-02 05:09:29.540499 | orchestrator | Monday 02 February 2026 05:09:17 +0000 (0:00:03.064) 0:04:20.583 ******* 2026-02-02 05:09:29.540519 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:09:29.540538 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:09:29.540557 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:09:29.540577 | orchestrator | 2026-02-02 05:09:29.540601 | orchestrator | TASK [nova-cell : Configure 
loadbalancer for nova-spicehtml5proxy] ************* 2026-02-02 05:09:29.540631 | orchestrator | Monday 02 February 2026 05:09:20 +0000 (0:00:02.941) 0:04:23.524 ******* 2026-02-02 05:09:29.540656 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-1, testbed-node-0, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-02-02 05:09:29.540675 | orchestrator | 2026-02-02 05:09:29.540693 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-02-02 05:09:29.540712 | orchestrator | Monday 02 February 2026 05:09:22 +0000 (0:00:01.385) 0:04:24.910 ******* 2026-02-02 05:09:29.540776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-02 05:09:29.540800 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:09:29.540820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-02 05:09:29.540838 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:09:29.540860 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-02 05:09:29.540880 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:09:29.540900 | orchestrator | 2026-02-02 05:09:29.540918 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-02-02 05:09:29.540935 | orchestrator | Monday 02 February 2026 05:09:23 +0000 (0:00:01.421) 0:04:26.331 ******* 2026-02-02 05:09:29.540946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-02 05:09:29.540958 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:09:29.540979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-02 05:09:29.540990 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:09:29.541002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-02 05:09:29.541013 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:09:29.541033 | orchestrator | 2026-02-02 05:09:29.541045 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-02-02 05:09:29.541055 | orchestrator | Monday 02 February 2026 05:09:25 +0000 (0:00:01.561) 0:04:27.893 ******* 2026-02-02 05:09:29.541066 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:09:29.541077 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:09:29.541088 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:09:29.541098 | orchestrator | 2026-02-02 05:09:29.541115 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-02 05:09:29.541133 | orchestrator | Monday 02 February 2026 05:09:27 +0000 (0:00:01.802) 0:04:29.696 ******* 2026-02-02 05:09:29.541150 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:09:29.541168 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:09:29.541186 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:09:29.541206 | orchestrator | 2026-02-02 05:09:29.541225 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-02 05:09:29.541245 | orchestrator | Monday 
02 February 2026 05:09:29 +0000 (0:00:02.488) 0:04:32.184 ******* 2026-02-02 05:09:51.952544 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:09:51.952650 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:09:51.952663 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:09:51.952673 | orchestrator | 2026-02-02 05:09:51.952683 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-02-02 05:09:51.952693 | orchestrator | Monday 02 February 2026 05:09:33 +0000 (0:00:03.617) 0:04:35.802 ******* 2026-02-02 05:09:51.952707 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-02-02 05:09:51.952718 | orchestrator | 2026-02-02 05:09:51.952728 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-02-02 05:09:51.952737 | orchestrator | Monday 02 February 2026 05:09:34 +0000 (0:00:01.585) 0:04:37.388 ******* 2026-02-02 05:09:51.952749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-02 05:09:51.952761 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:09:51.952771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout 
tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-02 05:09:51.952781 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:09:51.952790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-02 05:09:51.952799 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:09:51.952807 | orchestrator | 2026-02-02 05:09:51.952816 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-02-02 05:09:51.952827 | orchestrator | Monday 02 February 2026 05:09:36 +0000 (0:00:01.585) 0:04:38.974 ******* 2026-02-02 05:09:51.952871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-02 05:09:51.952882 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:09:51.952891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-02 05:09:51.952899 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:09:51.952924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-02 05:09:51.952934 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:09:51.952943 | orchestrator | 2026-02-02 05:09:51.952952 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-02-02 05:09:51.952960 | orchestrator | Monday 02 February 2026 05:09:37 +0000 (0:00:01.468) 0:04:40.442 ******* 2026-02-02 05:09:51.952969 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:09:51.952978 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:09:51.952986 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:09:51.952995 | orchestrator | 2026-02-02 05:09:51.953003 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-02 05:09:51.953012 | orchestrator | Monday 02 February 2026 05:09:39 +0000 (0:00:02.166) 0:04:42.608 ******* 2026-02-02 05:09:51.953021 
| orchestrator | ok: [testbed-node-1] 2026-02-02 05:09:51.953029 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:09:51.953038 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:09:51.953046 | orchestrator | 2026-02-02 05:09:51.953055 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-02 05:09:51.953064 | orchestrator | Monday 02 February 2026 05:09:42 +0000 (0:00:02.507) 0:04:45.116 ******* 2026-02-02 05:09:51.953072 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:09:51.953081 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:09:51.953090 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:09:51.953100 | orchestrator | 2026-02-02 05:09:51.953111 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-02-02 05:09:51.953121 | orchestrator | Monday 02 February 2026 05:09:46 +0000 (0:00:03.853) 0:04:48.970 ******* 2026-02-02 05:09:51.953131 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:09:51.953141 | orchestrator | 2026-02-02 05:09:51.953151 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-02-02 05:09:51.953161 | orchestrator | Monday 02 February 2026 05:09:47 +0000 (0:00:01.687) 0:04:50.658 ******* 2026-02-02 05:09:51.953173 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 
'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-02 05:09:51.953197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-02 05:09:51.953211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-02 05:09:51.953229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-02 05:09:52.771884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-02 05:09:52.771987 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}}}}) 2026-02-02 05:09:52.772032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-02 05:09:52.772061 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-02 05:09:52.772074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-02 05:09:52.772103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-02 05:09:52.772117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-02 05:09:52.772128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-02 05:09:52.772147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-02 05:09:52.772170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-02 05:09:52.772182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-02 05:09:52.772194 | orchestrator | 2026-02-02 05:09:52.772207 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-02 05:09:52.772219 | orchestrator | Monday 02 February 2026 05:09:52 +0000 (0:00:04.139) 0:04:54.797 ******* 2026-02-02 05:09:52.772239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-02 05:09:52.920360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-02 05:09:52.920478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-02 05:09:52.920495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-02 05:09:52.920522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-02 05:09:52.920535 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:09:52.920549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-02 05:09:52.920565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}})  2026-02-02 05:09:52.920595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-02 05:09:52.920614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-02 05:09:52.920658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-02 05:09:52.920672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-02 05:09:52.920684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-02 05:09:52.920695 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:09:52.920713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-02 05:10:06.271210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-02 05:10:06.271424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-02 05:10:06.271458 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:10:06.271478 | orchestrator | 2026-02-02 05:10:06.271499 | orchestrator | TASK [haproxy-config : 
Configuring firewall for octavia] *********************** 2026-02-02 05:10:06.271518 | orchestrator | Monday 02 February 2026 05:09:52 +0000 (0:00:00.777) 0:04:55.574 ******* 2026-02-02 05:10:06.271536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-02 05:10:06.271558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-02 05:10:06.271579 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:10:06.271622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-02 05:10:06.271642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-02 05:10:06.271661 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:10:06.271672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-02 05:10:06.271684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-02 05:10:06.271695 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:10:06.271706 | orchestrator | 2026-02-02 05:10:06.271717 | orchestrator | 
TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-02 05:10:06.271730 | orchestrator | Monday 02 February 2026 05:09:54 +0000 (0:00:01.729) 0:04:57.304 ******* 2026-02-02 05:10:06.271744 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:10:06.271763 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:10:06.271783 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:10:06.271801 | orchestrator | 2026-02-02 05:10:06.271814 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-02 05:10:06.271828 | orchestrator | Monday 02 February 2026 05:09:55 +0000 (0:00:01.261) 0:04:58.566 ******* 2026-02-02 05:10:06.271840 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:10:06.271879 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:10:06.271898 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:10:06.271917 | orchestrator | 2026-02-02 05:10:06.271937 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-02-02 05:10:06.271956 | orchestrator | Monday 02 February 2026 05:09:58 +0000 (0:00:02.211) 0:05:00.778 ******* 2026-02-02 05:10:06.271976 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:10:06.271996 | orchestrator | 2026-02-02 05:10:06.272017 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-02-02 05:10:06.272034 | orchestrator | Monday 02 February 2026 05:09:59 +0000 (0:00:01.709) 0:05:02.487 ******* 2026-02-02 05:10:06.272082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:10:06.272098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:10:06.272117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:10:06.272131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-02 05:10:06.272163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-02 05:10:08.798807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-02 05:10:08.798912 | orchestrator | 2026-02-02 05:10:08.798947 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-02 05:10:08.798962 | orchestrator | Monday 02 February 2026 05:10:06 +0000 (0:00:06.426) 0:05:08.914 ******* 2026-02-02 05:10:08.798976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:10:08.799011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-02 05:10:08.799025 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:10:08.799136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:10:08.799185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-02 05:10:08.799199 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:10:08.799210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:10:08.799231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-02 05:10:08.799243 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:10:08.799254 | orchestrator | 2026-02-02 05:10:08.799266 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-02 05:10:08.799277 | orchestrator | Monday 02 February 2026 05:10:07 +0000 (0:00:01.095) 0:05:10.010 ******* 2026-02-02 05:10:08.799290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:10:08.799334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option 
httpchk GET /api/status']}})  2026-02-02 05:10:15.544554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-02 05:10:15.544669 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:10:15.544688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:10:15.544702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-02 05:10:15.544731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-02 05:10:15.544743 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:10:15.544773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:10:15.544785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-02 05:10:15.544796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-02 05:10:15.544807 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:10:15.544818 | orchestrator | 2026-02-02 05:10:15.544830 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-02 05:10:15.544842 | orchestrator | Monday 02 February 2026 05:10:08 +0000 (0:00:01.437) 0:05:11.447 ******* 2026-02-02 05:10:15.544853 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:10:15.544864 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:10:15.544874 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:10:15.544885 | orchestrator | 2026-02-02 05:10:15.544895 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-02 05:10:15.544906 | orchestrator | Monday 02 February 2026 05:10:09 +0000 (0:00:00.500) 0:05:11.948 ******* 2026-02-02 05:10:15.544916 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:10:15.544927 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:10:15.544938 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:10:15.544948 | orchestrator | 2026-02-02 05:10:15.544959 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-02 05:10:15.544970 | orchestrator | Monday 02 February 2026 05:10:10 +0000 (0:00:01.576) 0:05:13.524 ******* 2026-02-02 05:10:15.544981 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:10:15.544992 | orchestrator | 2026-02-02 
05:10:15.545002 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-02 05:10:15.545013 | orchestrator | Monday 02 February 2026 05:10:12 +0000 (0:00:01.849) 0:05:15.374 ******* 2026-02-02 05:10:15.545050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-02 05:10:15.545066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-02 05:10:15.545092 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:10:15.545108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:10:15.545122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 05:10:15.545136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-02 05:10:15.545150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-02 05:10:15.545190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 
05:10:17.291575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:10:17.291691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 05:10:17.291706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-02 05:10:17.291717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-02 05:10:17.291726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:10:17.291734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-02-02 05:10:17.291756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 05:10:17.291787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:10:17.291797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 
'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-02 05:10:17.291806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:10:17.291813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:10:17.291821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-02 05:10:17.291846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:10:18.970579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-02 05:10:18.970680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:10:18.970697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:10:18.970709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-02 05:10:18.970723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:10:18.970794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 
'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-02 05:10:18.970809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:10:18.970820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:10:18.970831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-02 05:10:18.970843 | orchestrator | 
2026-02-02 05:10:18.970856 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-02 05:10:18.970868 | orchestrator | Monday 02 February 2026 05:10:18 +0000 (0:00:05.306) 0:05:20.680 ******* 2026-02-02 05:10:18.970880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-02 05:10:18.970910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  
2026-02-02 05:10:18.970935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:10:19.124686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:10:19.124786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 05:10:19.124803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:10:19.124839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-02 05:10:19.124864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:10:19.124893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:10:19.124904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-02 05:10:19.124914 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:10:19.124927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-02 05:10:19.124938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-02 05:10:19.124955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:10:19.124966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 
'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:10:19.124981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 05:10:19.125000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:10:19.639837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-02 05:10:19.639956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:10:19.639973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 
'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-02 05:10:19.640000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:10:19.640011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
 2026-02-02 05:10:19.640038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-02 05:10:19.640050 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:10:19.640061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:10:19.640078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:10:19.640088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-02 05:10:19.640104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:10:19.640115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-02 05:10:19.640132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:10:28.699476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:10:28.699617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-02 05:10:28.699635 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:10:28.699649 | orchestrator | 2026-02-02 05:10:28.699662 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-02 05:10:28.699674 | orchestrator | Monday 02 February 2026 05:10:19 +0000 (0:00:01.606) 0:05:22.287 ******* 2026-02-02 05:10:28.699686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-02 05:10:28.699700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-02 05:10:28.699714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:10:28.699741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:10:28.699754 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:10:28.699765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-02 05:10:28.699777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-02 05:10:28.699788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:10:28.699817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:10:28.699839 | orchestrator | skipping: [testbed-node-1] 2026-02-02 
05:10:28.699850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-02 05:10:28.699861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-02 05:10:28.699872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:10:28.699883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-02 05:10:28.699894 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:10:28.699905 | orchestrator | 2026-02-02 05:10:28.699916 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-02 05:10:28.699929 | orchestrator | Monday 02 February 2026 05:10:20 +0000 (0:00:01.337) 0:05:23.624 ******* 2026-02-02 05:10:28.699943 | orchestrator | skipping: 
[testbed-node-0] 2026-02-02 05:10:28.699956 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:10:28.699970 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:10:28.699983 | orchestrator | 2026-02-02 05:10:28.699997 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-02 05:10:28.700009 | orchestrator | Monday 02 February 2026 05:10:21 +0000 (0:00:00.536) 0:05:24.161 ******* 2026-02-02 05:10:28.700022 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:10:28.700034 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:10:28.700046 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:10:28.700058 | orchestrator | 2026-02-02 05:10:28.700070 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-02 05:10:28.700083 | orchestrator | Monday 02 February 2026 05:10:23 +0000 (0:00:01.682) 0:05:25.844 ******* 2026-02-02 05:10:28.700096 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:10:28.700108 | orchestrator | 2026-02-02 05:10:28.700121 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-02 05:10:28.700139 | orchestrator | Monday 02 February 2026 05:10:25 +0000 (0:00:01.865) 0:05:27.710 ******* 2026-02-02 05:10:28.700154 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 05:10:28.700187 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 05:10:42.571230 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 05:10:42.571420 | orchestrator | 2026-02-02 05:10:42.571440 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-02 05:10:42.571452 | orchestrator | Monday 02 February 2026 05:10:28 +0000 (0:00:03.639) 0:05:31.350 ******* 2026-02-02 05:10:42.571481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-02 05:10:42.571495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-02 05:10:42.571530 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:10:42.571543 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:10:42.571573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 
'host_group': 'rabbitmq'}}}})  2026-02-02 05:10:42.571585 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:10:42.571596 | orchestrator | 2026-02-02 05:10:42.571607 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-02 05:10:42.571618 | orchestrator | Monday 02 February 2026 05:10:29 +0000 (0:00:00.853) 0:05:32.203 ******* 2026-02-02 05:10:42.571630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-02 05:10:42.571643 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:10:42.571660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-02 05:10:42.571678 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:10:42.571697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-02 05:10:42.571715 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:10:42.571733 | orchestrator | 2026-02-02 05:10:42.571753 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-02 05:10:42.571769 | orchestrator | Monday 02 February 2026 05:10:30 +0000 (0:00:00.723) 0:05:32.926 ******* 2026-02-02 05:10:42.571782 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:10:42.571795 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:10:42.571808 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:10:42.571820 | orchestrator | 2026-02-02 05:10:42.571833 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-02 05:10:42.571845 | orchestrator | Monday 02 February 2026 05:10:30 +0000 (0:00:00.461) 0:05:33.388 ******* 2026-02-02 
05:10:42.571858 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:10:42.571871 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:10:42.571884 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:10:42.571897 | orchestrator | 2026-02-02 05:10:42.571907 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-02 05:10:42.571918 | orchestrator | Monday 02 February 2026 05:10:32 +0000 (0:00:02.000) 0:05:35.389 ******* 2026-02-02 05:10:42.571928 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:10:42.571949 | orchestrator | 2026-02-02 05:10:42.571959 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-02 05:10:42.571970 | orchestrator | Monday 02 February 2026 05:10:34 +0000 (0:00:01.592) 0:05:36.981 ******* 2026-02-02 05:10:42.571989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-02 
05:10:42.572002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-02 05:10:42.572025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-02 05:10:43.299074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-02 05:10:43.299242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-02 05:10:43.299272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-02 05:10:43.299360 | orchestrator | 2026-02-02 05:10:43.299377 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-02 05:10:43.299389 | orchestrator | Monday 02 February 2026 05:10:42 +0000 (0:00:08.228) 0:05:45.209 ******* 2026-02-02 05:10:43.299423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-02 05:10:43.299442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-02 05:10:43.299462 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:10:43.299474 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-02 05:10:43.299485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-02 05:10:43.299496 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:10:43.299514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-02 05:10:56.204168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-02 05:10:56.204266 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:10:56.204279 | orchestrator | 2026-02-02 05:10:56.204288 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-02 05:10:56.204339 | orchestrator | Monday 02 February 2026 05:10:43 +0000 (0:00:00.734) 0:05:45.944 ******* 2026-02-02 05:10:56.204349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-02 05:10:56.204360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-02 05:10:56.204369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-02 05:10:56.204377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-02 05:10:56.204385 | orchestrator | skipping: 
[testbed-node-0] 2026-02-02 05:10:56.204392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-02 05:10:56.204400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-02 05:10:56.204407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-02 05:10:56.204415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-02 05:10:56.204440 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:10:56.204448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-02 05:10:56.204489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-02 05:10:56.204510 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-02 05:10:56.204519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-02 05:10:56.204526 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:10:56.204533 | orchestrator | 2026-02-02 05:10:56.204544 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-02 05:10:56.204552 | orchestrator | Monday 02 February 2026 05:10:44 +0000 (0:00:01.089) 0:05:47.034 ******* 2026-02-02 05:10:56.204559 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:10:56.204567 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:10:56.204574 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:10:56.204581 | orchestrator | 2026-02-02 05:10:56.204588 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-02 05:10:56.204595 | orchestrator | Monday 02 February 2026 05:10:46 +0000 (0:00:01.789) 0:05:48.823 ******* 2026-02-02 05:10:56.204602 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:10:56.204609 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:10:56.204616 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:10:56.204623 | orchestrator | 2026-02-02 05:10:56.204630 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-02 05:10:56.204638 | orchestrator | Monday 02 February 2026 05:10:48 +0000 (0:00:02.261) 0:05:51.085 ******* 2026-02-02 05:10:56.204645 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:10:56.204652 | orchestrator 
| skipping: [testbed-node-1] 2026-02-02 05:10:56.204659 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:10:56.204666 | orchestrator | 2026-02-02 05:10:56.204673 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-02 05:10:56.204680 | orchestrator | Monday 02 February 2026 05:10:48 +0000 (0:00:00.347) 0:05:51.432 ******* 2026-02-02 05:10:56.204687 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:10:56.204695 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:10:56.204702 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:10:56.204709 | orchestrator | 2026-02-02 05:10:56.204716 | orchestrator | TASK [include_role : venus] **************************************************** 2026-02-02 05:10:56.204724 | orchestrator | Monday 02 February 2026 05:10:49 +0000 (0:00:00.359) 0:05:51.791 ******* 2026-02-02 05:10:56.204733 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:10:56.204742 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:10:56.204750 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:10:56.204758 | orchestrator | 2026-02-02 05:10:56.204767 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-02 05:10:56.204775 | orchestrator | Monday 02 February 2026 05:10:49 +0000 (0:00:00.799) 0:05:52.591 ******* 2026-02-02 05:10:56.204784 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:10:56.204792 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:10:56.204800 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:10:56.204808 | orchestrator | 2026-02-02 05:10:56.204824 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-02 05:10:56.204832 | orchestrator | Monday 02 February 2026 05:10:50 +0000 (0:00:00.488) 0:05:53.080 ******* 2026-02-02 05:10:56.204841 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:10:56.204849 | orchestrator | 
skipping: [testbed-node-1] 2026-02-02 05:10:56.204857 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:10:56.204866 | orchestrator | 2026-02-02 05:10:56.204874 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-02-02 05:10:56.204881 | orchestrator | Monday 02 February 2026 05:10:50 +0000 (0:00:00.379) 0:05:53.459 ******* 2026-02-02 05:10:56.204888 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:10:56.204896 | orchestrator | 2026-02-02 05:10:56.204903 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-02-02 05:10:56.204911 | orchestrator | Monday 02 February 2026 05:10:53 +0000 (0:00:02.324) 0:05:55.784 ******* 2026-02-02 05:10:56.204919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-02 05:10:56.204934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-02 05:10:59.097946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-02 05:10:59.098157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 05:10:59.098183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 05:10:59.098215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-02 05:10:59.098227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 05:10:59.098238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 05:10:59.098267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-02 05:10:59.098279 | orchestrator | 2026-02-02 05:10:59.098315 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-02-02 05:10:59.098333 | orchestrator | Monday 02 February 2026 05:10:56 +0000 (0:00:03.067) 0:05:58.851 ******* 2026-02-02 05:10:59.098351 | orchestrator | changed: [testbed-node-0] => { 2026-02-02 05:10:59.098368 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:10:59.098385 | orchestrator | } 2026-02-02 05:10:59.098402 | orchestrator | changed: [testbed-node-1] => { 2026-02-02 05:10:59.098418 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:10:59.098428 | orchestrator | } 2026-02-02 05:10:59.098437 | orchestrator | changed: [testbed-node-2] => { 2026-02-02 05:10:59.098455 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:10:59.098466 | orchestrator | } 2026-02-02 05:10:59.098478 | orchestrator | 2026-02-02 05:10:59.098490 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-02 05:10:59.098543 | orchestrator | Monday 02 February 2026 05:10:56 +0000 (0:00:00.414) 0:05:59.266 ******* 2026-02-02 05:10:59.098556 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-02 05:10:59.098568 | orchestrator 
| plugin (): 'NoneType' object is not subscriptable 2026-02-02 05:10:59.098602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-02 05:10:59.098615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 05:10:59.098627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
 2026-02-02 05:10:59.098640 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:10:59.098652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-02 05:10:59.098673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 05:12:44.434081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2026-02-02 05:12:44.434181 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:12:44.434195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-02 05:12:44.434222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-02 05:12:44.434230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-02 05:12:44.434237 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:12:44.434244 | orchestrator | 2026-02-02 05:12:44.434252 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-02 05:12:44.434260 | orchestrator | Monday 02 February 2026 05:10:59 +0000 (0:00:02.474) 0:06:01.740 ******* 2026-02-02 05:12:44.434266 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:12:44.434274 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:12:44.434280 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:12:44.434287 | orchestrator | 2026-02-02 05:12:44.434339 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-02 05:12:44.434346 | orchestrator | Monday 02 February 2026 05:11:00 +0000 (0:00:01.184) 0:06:02.925 ******* 2026-02-02 05:12:44.434353 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:12:44.434359 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:12:44.434366 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:12:44.434372 | orchestrator | 2026-02-02 05:12:44.434379 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-02 05:12:44.434386 | orchestrator | Monday 02 February 2026 05:11:00 +0000 (0:00:00.466) 0:06:03.392 ******* 2026-02-02 05:12:44.434392 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:12:44.434399 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:12:44.434406 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:12:44.434412 | orchestrator | 2026-02-02 05:12:44.434419 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-02-02 05:12:44.434428 | orchestrator | Monday 02 February 2026 05:11:06 +0000 (0:00:06.116) 0:06:09.508 ******* 2026-02-02 05:12:44.434439 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:12:44.434450 | orchestrator | 
changed: [testbed-node-1] 2026-02-02 05:12:44.434461 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:12:44.434472 | orchestrator | 2026-02-02 05:12:44.434483 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-02-02 05:12:44.434495 | orchestrator | Monday 02 February 2026 05:11:12 +0000 (0:00:06.035) 0:06:15.544 ******* 2026-02-02 05:12:44.434506 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:12:44.434518 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:12:44.434537 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:12:44.434545 | orchestrator | 2026-02-02 05:12:44.434551 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-02-02 05:12:44.434558 | orchestrator | Monday 02 February 2026 05:11:19 +0000 (0:00:06.432) 0:06:21.977 ******* 2026-02-02 05:12:44.434565 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:12:44.434572 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:12:44.434578 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:12:44.434585 | orchestrator | 2026-02-02 05:12:44.434607 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-02-02 05:12:44.434615 | orchestrator | Monday 02 February 2026 05:11:26 +0000 (0:00:06.918) 0:06:28.896 ******* 2026-02-02 05:12:44.434623 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:12:44.434630 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:12:44.434638 | orchestrator | 2026-02-02 05:12:44.434646 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-02-02 05:12:44.434653 | orchestrator | Monday 02 February 2026 05:11:29 +0000 (0:00:03.731) 0:06:32.628 ******* 2026-02-02 05:12:44.434661 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:12:44.434669 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:12:44.434682 | orchestrator | changed: 
[testbed-node-2] 2026-02-02 05:12:44.434690 | orchestrator | 2026-02-02 05:12:44.434699 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-02-02 05:12:44.434710 | orchestrator | Monday 02 February 2026 05:11:41 +0000 (0:00:12.014) 0:06:44.642 ******* 2026-02-02 05:12:44.434721 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:12:44.434731 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:12:44.434742 | orchestrator | 2026-02-02 05:12:44.434752 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-02-02 05:12:44.434764 | orchestrator | Monday 02 February 2026 05:11:46 +0000 (0:00:04.345) 0:06:48.988 ******* 2026-02-02 05:12:44.434775 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:12:44.434787 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:12:44.434797 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:12:44.434808 | orchestrator | 2026-02-02 05:12:44.434815 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-02-02 05:12:44.434822 | orchestrator | Monday 02 February 2026 05:11:52 +0000 (0:00:06.267) 0:06:55.256 ******* 2026-02-02 05:12:44.434828 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:12:44.434835 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:12:44.434842 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:12:44.434848 | orchestrator | 2026-02-02 05:12:44.434855 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-02-02 05:12:44.434861 | orchestrator | Monday 02 February 2026 05:11:58 +0000 (0:00:05.884) 0:07:01.141 ******* 2026-02-02 05:12:44.434868 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:12:44.434875 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:12:44.434881 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:12:44.434888 | orchestrator | 2026-02-02 05:12:44.434894 | 
orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-02-02 05:12:44.434901 | orchestrator | Monday 02 February 2026 05:12:04 +0000 (0:00:05.799) 0:07:06.940 ******* 2026-02-02 05:12:44.434908 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:12:44.434915 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:12:44.434922 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:12:44.434928 | orchestrator | 2026-02-02 05:12:44.434935 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-02-02 05:12:44.434941 | orchestrator | Monday 02 February 2026 05:12:10 +0000 (0:00:05.818) 0:07:12.759 ******* 2026-02-02 05:12:44.434948 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:12:44.434955 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:12:44.434961 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:12:44.434968 | orchestrator | 2026-02-02 05:12:44.434974 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] ************** 2026-02-02 05:12:44.434987 | orchestrator | Monday 02 February 2026 05:12:16 +0000 (0:00:06.222) 0:07:18.981 ******* 2026-02-02 05:12:44.434994 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:12:44.435001 | orchestrator | 2026-02-02 05:12:44.435007 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-02-02 05:12:44.435014 | orchestrator | Monday 02 February 2026 05:12:19 +0000 (0:00:03.600) 0:07:22.582 ******* 2026-02-02 05:12:44.435020 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:12:44.435027 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:12:44.435033 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:12:44.435040 | orchestrator | 2026-02-02 05:12:44.435047 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] ************* 2026-02-02 05:12:44.435053 | orchestrator | Monday 02 
February 2026 05:12:31 +0000 (0:00:11.639) 0:07:34.221 ******* 2026-02-02 05:12:44.435060 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:12:44.435066 | orchestrator | 2026-02-02 05:12:44.435073 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-02-02 05:12:44.435080 | orchestrator | Monday 02 February 2026 05:12:36 +0000 (0:00:04.560) 0:07:38.782 ******* 2026-02-02 05:12:44.435086 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:12:44.435093 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:12:44.435099 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:12:44.435106 | orchestrator | 2026-02-02 05:12:44.435113 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-02-02 05:12:44.435119 | orchestrator | Monday 02 February 2026 05:12:41 +0000 (0:00:05.681) 0:07:44.463 ******* 2026-02-02 05:12:44.435126 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:12:44.435133 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:12:44.435139 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:12:44.435146 | orchestrator | 2026-02-02 05:12:44.435153 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-02-02 05:12:44.435159 | orchestrator | Monday 02 February 2026 05:12:42 +0000 (0:00:00.985) 0:07:45.449 ******* 2026-02-02 05:12:44.435166 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:12:44.435173 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:12:44.435179 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:12:44.435186 | orchestrator | 2026-02-02 05:12:44.435193 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 05:12:44.435200 | orchestrator | testbed-node-0 : ok=129  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-02 05:12:44.435209 | orchestrator | testbed-node-1 : ok=128  changed=28  unreachable=0 failed=0 
skipped=94  rescued=0 ignored=0 2026-02-02 05:12:44.435221 | orchestrator | testbed-node-2 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-02 05:12:45.335014 | orchestrator | 2026-02-02 05:12:45.335115 | orchestrator | 2026-02-02 05:12:45.335131 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 05:12:45.335145 | orchestrator | Monday 02 February 2026 05:12:44 +0000 (0:00:01.633) 0:07:47.082 ******* 2026-02-02 05:12:45.335157 | orchestrator | =============================================================================== 2026-02-02 05:12:45.335168 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 12.01s 2026-02-02 05:12:45.335178 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 11.64s 2026-02-02 05:12:45.335208 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 8.23s 2026-02-02 05:12:45.335219 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 6.92s 2026-02-02 05:12:45.335230 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 6.43s 2026-02-02 05:12:45.335240 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.43s 2026-02-02 05:12:45.335251 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 6.27s 2026-02-02 05:12:45.335285 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 6.22s 2026-02-02 05:12:45.335361 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.17s 2026-02-02 05:12:45.335373 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 6.12s 2026-02-02 05:12:45.335384 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 6.04s 2026-02-02 
05:12:45.335394 | orchestrator | loadbalancer : Stop master haproxy container ---------------------------- 5.88s 2026-02-02 05:12:45.335405 | orchestrator | loadbalancer : Stop master keepalived container ------------------------- 5.82s 2026-02-02 05:12:45.335416 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 5.80s 2026-02-02 05:12:45.335427 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 5.68s 2026-02-02 05:12:45.335438 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.31s 2026-02-02 05:12:45.335448 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.11s 2026-02-02 05:12:45.335459 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.94s 2026-02-02 05:12:45.335470 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.78s 2026-02-02 05:12:45.335480 | orchestrator | loadbalancer : Wait for master proxysql to start ------------------------ 4.56s 2026-02-02 05:12:45.680680 | orchestrator | + osism apply -a upgrade opensearch 2026-02-02 05:12:47.815941 | orchestrator | 2026-02-02 05:12:47 | INFO  | Task 49209675-6ca4-4595-bc32-71de028a396a (opensearch) was prepared for execution. 2026-02-02 05:12:47.816043 | orchestrator | 2026-02-02 05:12:47 | INFO  | It takes a moment until task 49209675-6ca4-4595-bc32-71de028a396a (opensearch) has been started and output is visible here. 
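The handler sequence in the loadbalancer play above follows a fixed rolling order: containers on the keepalived BACKUP nodes (testbed-node-1/2) are stopped and restarted first, and only then is the MASTER (testbed-node-0) cycled, so the VIP always has live backends. A minimal sketch of that ordering (hypothetical helper, not OSISM/kolla-ansible code; node names taken from the log):

```python
def rolling_restart_plan(backup_nodes, master_node):
    """Return (action, service, nodes) steps in the order the
    handlers above executed them: BACKUP nodes first, MASTER last."""
    plan = []
    # On BACKUP nodes keepalived stops first, so the VIP stays on the master
    # while haproxy/proxysql are cycled there.
    for svc in ("keepalived", "haproxy", "proxysql"):
        plan.append(("stop", svc, list(backup_nodes)))
    for svc in ("haproxy", "proxysql", "keepalived"):
        plan.append(("start", svc, list(backup_nodes)))
    # Only after the backups are healthy again is the MASTER cycled;
    # the VIP fails over to a backup while it is down.
    for svc in ("haproxy", "proxysql", "keepalived"):
        plan.append(("stop", svc, [master_node]))
    for svc in ("haproxy", "proxysql", "keepalived"):
        plan.append(("start", svc, [master_node]))
    return plan

plan = rolling_restart_plan(["testbed-node-1", "testbed-node-2"],
                            "testbed-node-0")
```

This mirrors why the log shows `skipping: [testbed-node-0]` on every "Stop/Start backup …" handler and `skipping` on the backups for every "… master …" handler; the "Wait for … to start" tasks between steps correspond to the health checks before proceeding.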
2026-02-02 05:13:06.729566 | orchestrator | 2026-02-02 05:13:06.729687 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 05:13:06.729704 | orchestrator | 2026-02-02 05:13:06.729716 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 05:13:06.729728 | orchestrator | Monday 02 February 2026 05:12:53 +0000 (0:00:01.526) 0:00:01.526 ******* 2026-02-02 05:13:06.729739 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:13:06.729752 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:13:06.729759 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:13:06.729766 | orchestrator | 2026-02-02 05:13:06.729772 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 05:13:06.729779 | orchestrator | Monday 02 February 2026 05:12:55 +0000 (0:00:01.812) 0:00:03.339 ******* 2026-02-02 05:13:06.729789 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-02-02 05:13:06.729800 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-02-02 05:13:06.729810 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-02-02 05:13:06.729820 | orchestrator | 2026-02-02 05:13:06.729829 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-02-02 05:13:06.729839 | orchestrator | 2026-02-02 05:13:06.729849 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-02 05:13:06.729859 | orchestrator | Monday 02 February 2026 05:12:57 +0000 (0:00:01.965) 0:00:05.304 ******* 2026-02-02 05:13:06.729869 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:13:06.729880 | orchestrator | 2026-02-02 05:13:06.729891 | orchestrator | TASK [opensearch : Setting sysctl values] 
************************************** 2026-02-02 05:13:06.729901 | orchestrator | Monday 02 February 2026 05:13:00 +0000 (0:00:02.616) 0:00:07.921 ******* 2026-02-02 05:13:06.729913 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-02 05:13:06.729923 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-02 05:13:06.729934 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-02 05:13:06.729969 | orchestrator | 2026-02-02 05:13:06.729981 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-02 05:13:06.729992 | orchestrator | Monday 02 February 2026 05:13:02 +0000 (0:00:02.439) 0:00:10.360 ******* 2026-02-02 05:13:06.730067 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:13:06.730082 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:13:06.730112 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:13:06.730126 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-02 05:13:06.730153 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-02 05:13:06.730167 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-02 05:13:06.730175 | orchestrator | 2026-02-02 05:13:06.730183 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-02 05:13:06.730191 | orchestrator | Monday 02 February 2026 05:13:05 +0000 (0:00:02.370) 0:00:12.731 ******* 2026-02-02 05:13:06.730199 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:13:06.730207 | orchestrator | 2026-02-02 05:13:06.730219 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-02 05:13:12.210931 | orchestrator | Monday 02 February 2026 05:13:06 +0000 
(0:00:01.668) 0:00:14.400 ******* 2026-02-02 05:13:12.211017 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:13:12.211046 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:13:12.211066 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:13:12.211075 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-02 05:13:12.211096 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-02 05:13:12.211110 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-02 05:13:12.211117 | orchestrator | 2026-02-02 05:13:12.211128 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-02-02 05:13:12.211135 | orchestrator | Monday 02 February 2026 05:13:10 +0000 (0:00:03.521) 0:00:17.922 ******* 2026-02-02 05:13:12.211142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:13:12.211155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-02 05:13:14.082342 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:13:14.082435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 
 2026-02-02 05:13:14.082487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-02 05:13:14.082500 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:13:14.082510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:13:14.082537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-02 05:13:14.082553 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:13:14.082563 | orchestrator | 2026-02-02 05:13:14.082573 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-02-02 05:13:14.082583 | orchestrator | Monday 02 February 2026 05:13:12 +0000 (0:00:01.966) 0:00:19.888 ******* 2026-02-02 05:13:14.082592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:13:14.082606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  
2026-02-02 05:13:14.082617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:13:14.082626 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:13:14.082643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-02 05:13:17.958095 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:13:17.958229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:13:17.958272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-02 05:13:17.958288 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:13:17.958300 | orchestrator | 2026-02-02 05:13:17.958340 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-02-02 05:13:17.958360 | orchestrator | Monday 02 February 2026 05:13:14 +0000 (0:00:01.868) 0:00:21.757 ******* 2026-02-02 05:13:17.958372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:13:17.958426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:13:17.958439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:13:17.958457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-02 05:13:17.958470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-02 05:13:17.958499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-02 05:13:31.561433 | orchestrator | 2026-02-02 05:13:31.561540 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-02 05:13:31.561556 | orchestrator | Monday 02 February 2026 05:13:17 +0000 (0:00:03.875) 0:00:25.632 ******* 2026-02-02 05:13:31.561562 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:13:31.561568 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:13:31.561573 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:13:31.561578 | orchestrator | 2026-02-02 05:13:31.561583 | orchestrator | TASK [opensearch : 
Copying over opensearch-dashboards config file] ************* 2026-02-02 05:13:31.561588 | orchestrator | Monday 02 February 2026 05:13:21 +0000 (0:00:03.527) 0:00:29.160 ******* 2026-02-02 05:13:31.561593 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:13:31.561598 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:13:31.561603 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:13:31.561608 | orchestrator | 2026-02-02 05:13:31.561613 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-02-02 05:13:31.561617 | orchestrator | Monday 02 February 2026 05:13:24 +0000 (0:00:02.988) 0:00:32.148 ******* 2026-02-02 05:13:31.561637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:13:31.561644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:13:31.561664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-02 05:13:31.561684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-02 05:13:31.561695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-02 05:13:31.561700 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-02 05:13:31.561710 | orchestrator | 2026-02-02 05:13:31.561715 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-02-02 05:13:31.561721 | orchestrator | Monday 02 February 2026 05:13:28 +0000 (0:00:03.572) 0:00:35.720 ******* 2026-02-02 05:13:31.561726 | orchestrator | changed: [testbed-node-0] => { 2026-02-02 05:13:31.561731 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:13:31.561736 | orchestrator | } 2026-02-02 05:13:31.561741 | orchestrator | changed: [testbed-node-1] => { 2026-02-02 05:13:31.561746 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:13:31.561751 | orchestrator | } 2026-02-02 05:13:31.561756 | orchestrator | changed: [testbed-node-2] => { 2026-02-02 05:13:31.561760 | orchestrator 
|  "msg": "Notifying handlers" 2026-02-02 05:13:31.561765 | orchestrator | } 2026-02-02 05:13:31.561770 | orchestrator | 2026-02-02 05:13:31.561774 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-02 05:13:31.561779 | orchestrator | Monday 02 February 2026 05:13:29 +0000 (0:00:01.361) 0:00:37.081 ******* 2026-02-02 05:13:31.561790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:16:45.959845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-02 05:16:45.959953 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:16:45.959969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:16:45.959979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-02 05:16:45.959988 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:16:45.960012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-02 05:16:45.960028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-02 05:16:45.960048 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:16:45.960057 | orchestrator | 2026-02-02 05:16:45.960066 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-02 05:16:45.960077 | orchestrator | Monday 02 February 2026 05:13:31 +0000 (0:00:02.150) 0:00:39.232 ******* 2026-02-02 05:16:45.960086 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:16:45.960094 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:16:45.960103 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:16:45.960111 | orchestrator | 2026-02-02 05:16:45.960119 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-02 05:16:45.960128 | orchestrator | Monday 02 February 2026 05:13:33 +0000 (0:00:01.529) 0:00:40.762 ******* 2026-02-02 05:16:45.960136 | orchestrator | 
2026-02-02 05:16:45.960145 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-02 05:16:45.960153 | orchestrator | Monday 02 February 2026 05:13:33 +0000 (0:00:00.468) 0:00:41.231 ******* 2026-02-02 05:16:45.960161 | orchestrator | 2026-02-02 05:16:45.960169 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-02 05:16:45.960179 | orchestrator | Monday 02 February 2026 05:13:34 +0000 (0:00:00.517) 0:00:41.748 ******* 2026-02-02 05:16:45.960188 | orchestrator | 2026-02-02 05:16:45.960196 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-02 05:16:45.960205 | orchestrator | Monday 02 February 2026 05:13:34 +0000 (0:00:00.814) 0:00:42.563 ******* 2026-02-02 05:16:45.960213 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:16:45.960224 | orchestrator | 2026-02-02 05:16:45.960232 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-02 05:16:45.960241 | orchestrator | Monday 02 February 2026 05:13:38 +0000 (0:00:03.462) 0:00:46.026 ******* 2026-02-02 05:16:45.960250 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:16:45.960259 | orchestrator | 2026-02-02 05:16:45.960267 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-02 05:16:45.960276 | orchestrator | Monday 02 February 2026 05:13:48 +0000 (0:00:09.782) 0:00:55.808 ******* 2026-02-02 05:16:45.960285 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:16:45.960294 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:16:45.960302 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:16:45.960312 | orchestrator | 2026-02-02 05:16:45.960321 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-02 05:16:45.960330 | orchestrator | Monday 02 February 2026 05:15:01 +0000 (0:01:13.343) 
0:02:09.152 ******* 2026-02-02 05:16:45.960339 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:16:45.960348 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:16:45.960356 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:16:45.960365 | orchestrator | 2026-02-02 05:16:45.960375 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-02 05:16:45.960384 | orchestrator | Monday 02 February 2026 05:16:36 +0000 (0:01:34.697) 0:03:43.850 ******* 2026-02-02 05:16:45.960394 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:16:45.960403 | orchestrator | 2026-02-02 05:16:45.960413 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-02 05:16:45.960422 | orchestrator | Monday 02 February 2026 05:16:37 +0000 (0:00:01.701) 0:03:45.552 ******* 2026-02-02 05:16:45.960432 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:16:45.960440 | orchestrator | 2026-02-02 05:16:45.960450 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-02 05:16:45.960459 | orchestrator | Monday 02 February 2026 05:16:41 +0000 (0:00:03.386) 0:03:48.939 ******* 2026-02-02 05:16:45.960477 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:16:45.960486 | orchestrator | 2026-02-02 05:16:45.960495 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-02 05:16:45.960504 | orchestrator | Monday 02 February 2026 05:16:44 +0000 (0:00:03.430) 0:03:52.369 ******* 2026-02-02 05:16:45.960513 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:16:45.960545 | orchestrator | 2026-02-02 05:16:45.960555 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-02 05:16:45.960573 | orchestrator | Monday 02 February 2026 05:16:45 +0000 (0:00:01.261) 
0:03:53.630 ******* 2026-02-02 05:16:48.392643 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:16:48.392731 | orchestrator | 2026-02-02 05:16:48.392742 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 05:16:48.392753 | orchestrator | testbed-node-0 : ok=19  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-02 05:16:48.392762 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-02 05:16:48.392786 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-02 05:16:48.392794 | orchestrator | 2026-02-02 05:16:48.392801 | orchestrator | 2026-02-02 05:16:48.392808 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 05:16:48.392816 | orchestrator | Monday 02 February 2026 05:16:47 +0000 (0:00:02.021) 0:03:55.652 ******* 2026-02-02 05:16:48.392823 | orchestrator | =============================================================================== 2026-02-02 05:16:48.392830 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 94.70s 2026-02-02 05:16:48.392837 | orchestrator | opensearch : Restart opensearch container ------------------------------ 73.34s 2026-02-02 05:16:48.392844 | orchestrator | opensearch : Perform a flush -------------------------------------------- 9.78s 2026-02-02 05:16:48.392851 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.88s 2026-02-02 05:16:48.392858 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 3.57s 2026-02-02 05:16:48.392865 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.53s 2026-02-02 05:16:48.392872 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.52s 2026-02-02 
05:16:48.392879 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 3.46s 2026-02-02 05:16:48.392886 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 3.43s 2026-02-02 05:16:48.392893 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.39s 2026-02-02 05:16:48.392900 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.99s 2026-02-02 05:16:48.392907 | orchestrator | opensearch : include_tasks ---------------------------------------------- 2.62s 2026-02-02 05:16:48.392914 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 2.44s 2026-02-02 05:16:48.392922 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.37s 2026-02-02 05:16:48.392929 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.15s 2026-02-02 05:16:48.392936 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.02s 2026-02-02 05:16:48.392943 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.97s 2026-02-02 05:16:48.392950 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.97s 2026-02-02 05:16:48.392958 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.87s 2026-02-02 05:16:48.392965 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.81s 2026-02-02 05:16:48.738762 | orchestrator | + osism apply -a upgrade memcached 2026-02-02 05:16:50.858850 | orchestrator | 2026-02-02 05:16:50 | INFO  | Task 5eb4900b-04d9-4316-91aa-2c4c95a06b87 (memcached) was prepared for execution. 
2026-02-02 05:16:50.858949 | orchestrator | 2026-02-02 05:16:50 | INFO  | It takes a moment until task 5eb4900b-04d9-4316-91aa-2c4c95a06b87 (memcached) has been started and output is visible here. 2026-02-02 05:17:23.652205 | orchestrator | 2026-02-02 05:17:23.652355 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 05:17:23.652384 | orchestrator | 2026-02-02 05:17:23.652401 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 05:17:23.652419 | orchestrator | Monday 02 February 2026 05:16:56 +0000 (0:00:01.520) 0:00:01.520 ******* 2026-02-02 05:17:23.652436 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:17:23.652454 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:17:23.652472 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:17:23.652490 | orchestrator | 2026-02-02 05:17:23.652508 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 05:17:23.652526 | orchestrator | Monday 02 February 2026 05:16:58 +0000 (0:00:01.740) 0:00:03.260 ******* 2026-02-02 05:17:23.652546 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-02-02 05:17:23.652631 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-02-02 05:17:23.652652 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-02-02 05:17:23.652670 | orchestrator | 2026-02-02 05:17:23.652689 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-02-02 05:17:23.652708 | orchestrator | 2026-02-02 05:17:23.652728 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-02-02 05:17:23.652745 | orchestrator | Monday 02 February 2026 05:17:00 +0000 (0:00:01.761) 0:00:05.022 ******* 2026-02-02 05:17:23.652758 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-02 05:17:23.652771 | orchestrator | 2026-02-02 05:17:23.652783 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-02-02 05:17:23.652796 | orchestrator | Monday 02 February 2026 05:17:02 +0000 (0:00:02.067) 0:00:07.089 ******* 2026-02-02 05:17:23.652809 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-02-02 05:17:23.652822 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-02-02 05:17:23.652834 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-02-02 05:17:23.652846 | orchestrator | 2026-02-02 05:17:23.652859 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-02-02 05:17:23.652871 | orchestrator | Monday 02 February 2026 05:17:04 +0000 (0:00:01.935) 0:00:09.025 ******* 2026-02-02 05:17:23.652883 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-02-02 05:17:23.652896 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-02-02 05:17:23.652909 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-02-02 05:17:23.652921 | orchestrator | 2026-02-02 05:17:23.652933 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-02-02 05:17:23.652963 | orchestrator | Monday 02 February 2026 05:17:07 +0000 (0:00:02.775) 0:00:11.800 ******* 2026-02-02 05:17:23.652981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': 
{'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-02 05:17:23.652998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-02 05:17:23.653060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-02 05:17:23.653074 | orchestrator | 2026-02-02 05:17:23.653085 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 
2026-02-02 05:17:23.653096 | orchestrator | Monday 02 February 2026 05:17:09 +0000 (0:00:02.330) 0:00:14.130 ******* 2026-02-02 05:17:23.653106 | orchestrator | changed: [testbed-node-0] => { 2026-02-02 05:17:23.653117 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:17:23.653131 | orchestrator | } 2026-02-02 05:17:23.653150 | orchestrator | changed: [testbed-node-1] => { 2026-02-02 05:17:23.653168 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:17:23.653187 | orchestrator | } 2026-02-02 05:17:23.653205 | orchestrator | changed: [testbed-node-2] => { 2026-02-02 05:17:23.653222 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:17:23.653239 | orchestrator | } 2026-02-02 05:17:23.653258 | orchestrator | 2026-02-02 05:17:23.653275 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-02 05:17:23.653293 | orchestrator | Monday 02 February 2026 05:17:10 +0000 (0:00:01.398) 0:00:15.529 ******* 2026-02-02 05:17:23.653310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-02 05:17:23.653329 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:17:23.653359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 
'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-02 05:17:23.653395 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:17:23.653416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-02 05:17:23.653432 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:17:23.653443 | orchestrator | 2026-02-02 05:17:23.653454 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-02-02 05:17:23.653464 | orchestrator | Monday 02 February 2026 05:17:12 +0000 (0:00:02.070) 0:00:17.599 ******* 2026-02-02 05:17:23.653475 | orchestrator | changed: [testbed-node-2] 2026-02-02 
05:17:23.653486 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:17:23.653497 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:17:23.653507 | orchestrator | 2026-02-02 05:17:23.653518 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 05:17:23.653530 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 05:17:23.653542 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 05:17:23.653594 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 05:17:23.653607 | orchestrator | 2026-02-02 05:17:23.653618 | orchestrator | 2026-02-02 05:17:23.653629 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 05:17:23.653650 | orchestrator | Monday 02 February 2026 05:17:23 +0000 (0:00:10.829) 0:00:28.428 ******* 2026-02-02 05:17:24.001157 | orchestrator | =============================================================================== 2026-02-02 05:17:24.001254 | orchestrator | memcached : Restart memcached container -------------------------------- 10.83s 2026-02-02 05:17:24.001269 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.78s 2026-02-02 05:17:24.001280 | orchestrator | service-check-containers : memcached | Check containers ----------------- 2.33s 2026-02-02 05:17:24.001292 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.07s 2026-02-02 05:17:24.001303 | orchestrator | memcached : include_tasks ----------------------------------------------- 2.07s 2026-02-02 05:17:24.001313 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.94s 2026-02-02 05:17:24.001324 | orchestrator | Group hosts based on enabled services 
----------------------------------- 1.76s 2026-02-02 05:17:24.001336 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.74s 2026-02-02 05:17:24.001347 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 1.40s 2026-02-02 05:17:24.352117 | orchestrator | + osism apply -a upgrade redis 2026-02-02 05:17:26.487008 | orchestrator | 2026-02-02 05:17:26 | INFO  | Task 97d75695-54f0-41fe-8b85-d3376982f23f (redis) was prepared for execution. 2026-02-02 05:17:26.487144 | orchestrator | 2026-02-02 05:17:26 | INFO  | It takes a moment until task 97d75695-54f0-41fe-8b85-d3376982f23f (redis) has been started and output is visible here. 2026-02-02 05:17:44.047498 | orchestrator | 2026-02-02 05:17:44.047686 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 05:17:44.047710 | orchestrator | 2026-02-02 05:17:44.047723 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 05:17:44.047735 | orchestrator | Monday 02 February 2026 05:17:32 +0000 (0:00:01.732) 0:00:01.732 ******* 2026-02-02 05:17:44.047747 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:17:44.047760 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:17:44.047771 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:17:44.047784 | orchestrator | 2026-02-02 05:17:44.047797 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 05:17:44.047810 | orchestrator | Monday 02 February 2026 05:17:34 +0000 (0:00:01.679) 0:00:03.412 ******* 2026-02-02 05:17:44.047821 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-02-02 05:17:44.047834 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-02-02 05:17:44.047862 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-02-02 05:17:44.047870 | orchestrator | 2026-02-02 
05:17:44.047877 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-02-02 05:17:44.047885 | orchestrator | 2026-02-02 05:17:44.047892 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-02-02 05:17:44.047899 | orchestrator | Monday 02 February 2026 05:17:36 +0000 (0:00:02.316) 0:00:05.729 ******* 2026-02-02 05:17:44.047907 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:17:44.047915 | orchestrator | 2026-02-02 05:17:44.047922 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-02-02 05:17:44.047930 | orchestrator | Monday 02 February 2026 05:17:38 +0000 (0:00:02.159) 0:00:07.889 ******* 2026-02-02 05:17:44.047941 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-02 05:17:44.047954 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
redis-server 6379'], 'timeout': '30'}}}) 2026-02-02 05:17:44.047962 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-02 05:17:44.047971 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-02 05:17:44.048018 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-02 05:17:44.048033 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-02 05:17:44.048042 | orchestrator | 2026-02-02 05:17:44.048051 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-02-02 05:17:44.048060 | orchestrator | Monday 02 February 2026 05:17:40 +0000 (0:00:02.162) 0:00:10.051 ******* 2026-02-02 05:17:44.048069 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-02 05:17:44.048078 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-02 05:17:44.048087 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-02 05:17:44.048096 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-02 05:17:44.048118 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-02 05:17:51.239840 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-02 05:17:51.239952 | orchestrator | 2026-02-02 05:17:51.239970 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-02-02 05:17:51.239983 | orchestrator | Monday 02 February 2026 05:17:44 +0000 (0:00:03.151) 0:00:13.203 ******* 2026-02-02 05:17:51.239996 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-02 05:17:51.240010 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-02 05:17:51.240021 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-02 05:17:51.240033 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-02 05:17:51.240068 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-02 05:17:51.240104 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-02 05:17:51.240118 | orchestrator | 2026-02-02 05:17:51.240130 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-02-02 05:17:51.240141 | orchestrator | Monday 02 February 2026 05:17:48 +0000 (0:00:04.067) 0:00:17.270 ******* 2026-02-02 05:17:51.240152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': 
{'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-02 05:17:51.240164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-02 05:17:51.240176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-02 05:17:51.240198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-02 05:17:51.240212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-02 05:17:51.240234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': 
'30'}}}) 2026-02-02 05:18:19.027118 | orchestrator | 2026-02-02 05:18:19.027238 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-02-02 05:18:19.027256 | orchestrator | Monday 02 February 2026 05:17:51 +0000 (0:00:03.127) 0:00:20.398 ******* 2026-02-02 05:18:19.027269 | orchestrator | changed: [testbed-node-0] => { 2026-02-02 05:18:19.027282 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:18:19.027293 | orchestrator | } 2026-02-02 05:18:19.027304 | orchestrator | changed: [testbed-node-1] => { 2026-02-02 05:18:19.027315 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:18:19.027327 | orchestrator | } 2026-02-02 05:18:19.027338 | orchestrator | changed: [testbed-node-2] => { 2026-02-02 05:18:19.027349 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:18:19.027360 | orchestrator | } 2026-02-02 05:18:19.027371 | orchestrator | 2026-02-02 05:18:19.027382 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-02 05:18:19.027393 | orchestrator | Monday 02 February 2026 05:17:52 +0000 (0:00:01.683) 0:00:22.082 ******* 2026-02-02 05:18:19.027407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-02 05:18:19.027470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-02 05:18:19.027521 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:18:19.027535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-02 05:18:19.027547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-02 05:18:19.027558 | orchestrator | 
skipping: [testbed-node-1] 2026-02-02 05:18:19.027569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-02 05:18:19.027607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-02 05:18:19.027620 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:18:19.027670 | orchestrator | 2026-02-02 05:18:19.027685 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-02 05:18:19.027697 | orchestrator | Monday 02 February 2026 05:17:54 +0000 (0:00:01.794) 0:00:23.877 ******* 2026-02-02 05:18:19.027710 | orchestrator | 2026-02-02 05:18:19.027723 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-02 05:18:19.027736 | orchestrator | Monday 02 February 2026 05:17:55 +0000 
(0:00:00.448) 0:00:24.326 ******* 2026-02-02 05:18:19.027749 | orchestrator | 2026-02-02 05:18:19.027761 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-02 05:18:19.027783 | orchestrator | Monday 02 February 2026 05:17:55 +0000 (0:00:00.436) 0:00:24.762 ******* 2026-02-02 05:18:19.027796 | orchestrator | 2026-02-02 05:18:19.027810 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-02-02 05:18:19.027829 | orchestrator | Monday 02 February 2026 05:17:56 +0000 (0:00:00.767) 0:00:25.530 ******* 2026-02-02 05:18:19.027847 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:18:19.027864 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:18:19.027885 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:18:19.027903 | orchestrator | 2026-02-02 05:18:19.027921 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-02-02 05:18:19.027939 | orchestrator | Monday 02 February 2026 05:18:07 +0000 (0:00:10.854) 0:00:36.385 ******* 2026-02-02 05:18:19.027959 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:18:19.027978 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:18:19.027992 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:18:19.028006 | orchestrator | 2026-02-02 05:18:19.028019 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 05:18:19.028032 | orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 05:18:19.028045 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 05:18:19.028056 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 05:18:19.028066 | orchestrator | 2026-02-02 05:18:19.028077 | orchestrator | 2026-02-02 05:18:19.028088 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 05:18:19.028099 | orchestrator | Monday 02 February 2026 05:18:18 +0000 (0:00:11.343) 0:00:47.728 ******* 2026-02-02 05:18:19.028110 | orchestrator | =============================================================================== 2026-02-02 05:18:19.028120 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 11.34s 2026-02-02 05:18:19.028131 | orchestrator | redis : Restart redis container ---------------------------------------- 10.85s 2026-02-02 05:18:19.028142 | orchestrator | redis : Copying over redis config files --------------------------------- 4.07s 2026-02-02 05:18:19.028152 | orchestrator | redis : Copying over default config.json files -------------------------- 3.15s 2026-02-02 05:18:19.028163 | orchestrator | service-check-containers : redis | Check containers --------------------- 3.13s 2026-02-02 05:18:19.028174 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.32s 2026-02-02 05:18:19.028184 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.16s 2026-02-02 05:18:19.028195 | orchestrator | redis : include_tasks --------------------------------------------------- 2.16s 2026-02-02 05:18:19.028206 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.80s 2026-02-02 05:18:19.028216 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 1.68s 2026-02-02 05:18:19.028227 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.68s 2026-02-02 05:18:19.028237 | orchestrator | redis : Flush handlers -------------------------------------------------- 1.65s 2026-02-02 05:18:19.454112 | orchestrator | + osism apply -a upgrade mariadb 2026-02-02 05:18:21.603281 | orchestrator | 2026-02-02 05:18:21 | INFO  | Task 
7032cae3-406c-4381-aa29-50d3cf4aae2a (mariadb) was prepared for execution. 2026-02-02 05:18:21.603385 | orchestrator | 2026-02-02 05:18:21 | INFO  | It takes a moment until task 7032cae3-406c-4381-aa29-50d3cf4aae2a (mariadb) has been started and output is visible here. 2026-02-02 05:18:47.608918 | orchestrator | 2026-02-02 05:18:47.609016 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 05:18:47.609028 | orchestrator | 2026-02-02 05:18:47.609056 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 05:18:47.609064 | orchestrator | Monday 02 February 2026 05:18:27 +0000 (0:00:01.644) 0:00:01.644 ******* 2026-02-02 05:18:47.609072 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:18:47.609081 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:18:47.609088 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:18:47.609095 | orchestrator | 2026-02-02 05:18:47.609114 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 05:18:47.609121 | orchestrator | Monday 02 February 2026 05:18:29 +0000 (0:00:01.785) 0:00:03.430 ******* 2026-02-02 05:18:47.609128 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-02 05:18:47.609136 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-02-02 05:18:47.609143 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-02 05:18:47.609151 | orchestrator | 2026-02-02 05:18:47.609158 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-02 05:18:47.609165 | orchestrator | 2026-02-02 05:18:47.609172 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-02 05:18:47.609179 | orchestrator | Monday 02 February 2026 05:18:31 +0000 (0:00:02.104) 0:00:05.534 ******* 2026-02-02 05:18:47.609187 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-0) 2026-02-02 05:18:47.609194 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-02 05:18:47.609201 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-02 05:18:47.609208 | orchestrator | 2026-02-02 05:18:47.609215 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-02 05:18:47.609222 | orchestrator | Monday 02 February 2026 05:18:33 +0000 (0:00:01.606) 0:00:07.141 ******* 2026-02-02 05:18:47.609230 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:18:47.609238 | orchestrator | 2026-02-02 05:18:47.609246 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-02 05:18:47.609253 | orchestrator | Monday 02 February 2026 05:18:34 +0000 (0:00:01.745) 0:00:08.887 ******* 2026-02-02 05:18:47.609265 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' 
server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-02 05:18:47.609306 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-02 05:18:47.609316 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-02 05:18:47.609325 | orchestrator | 2026-02-02 05:18:47.609332 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-02 05:18:47.609340 | orchestrator | Monday 02 February 2026 05:18:38 +0000 (0:00:03.956) 0:00:12.843 ******* 2026-02-02 05:18:47.609347 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:18:47.609355 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:18:47.609368 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:18:47.609375 | orchestrator | 2026-02-02 05:18:47.609383 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-02-02 05:18:47.609390 | orchestrator | Monday 02 February 2026 05:18:40 +0000 (0:00:01.560) 0:00:14.404 ******* 2026-02-02 05:18:47.609397 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:18:47.609404 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:18:47.609411 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:18:47.609418 | orchestrator | 2026-02-02 05:18:47.609425 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-02-02 05:18:47.609433 | orchestrator | Monday 02 February 2026 05:18:42 +0000 (0:00:02.291) 0:00:16.695 ******* 2026-02-02 05:18:47.609450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-02 05:19:00.492392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-02 05:19:00.492554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-02 05:19:00.492574 | orchestrator | 2026-02-02 05:19:00.492588 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-02-02 05:19:00.492601 | orchestrator | Monday 02 February 2026 05:18:47 +0000 (0:00:04.801) 0:00:21.497 ******* 2026-02-02 05:19:00.492612 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:19:00.492624 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:19:00.492635 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:19:00.492646 | 
orchestrator | 2026-02-02 05:19:00.492657 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-02-02 05:19:00.492735 | orchestrator | Monday 02 February 2026 05:18:49 +0000 (0:00:02.162) 0:00:23.660 ******* 2026-02-02 05:19:00.492748 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:19:00.492758 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:19:00.492769 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:19:00.492780 | orchestrator | 2026-02-02 05:19:00.492791 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-02 05:19:00.492802 | orchestrator | Monday 02 February 2026 05:18:54 +0000 (0:00:05.140) 0:00:28.801 ******* 2026-02-02 05:19:00.492814 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:19:00.492825 | orchestrator | 2026-02-02 05:19:00.492836 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-02 05:19:00.492847 | orchestrator | Monday 02 February 2026 05:18:56 +0000 (0:00:01.977) 0:00:30.779 ******* 2026-02-02 05:19:00.492860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': 
{'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 05:19:00.492880 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:19:00.492906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 05:19:08.429316 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:19:08.429434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 05:19:08.429481 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:19:08.429493 | orchestrator | 2026-02-02 05:19:08.429506 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-02 05:19:08.429518 | orchestrator | Monday 02 February 2026 05:19:00 +0000 (0:00:03.603) 0:00:34.382 ******* 2026-02-02 05:19:08.429545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 
'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 05:19:08.429558 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:19:08.429591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 05:19:08.429612 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:19:08.429629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 05:19:08.429641 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:19:08.429652 | orchestrator | 2026-02-02 05:19:08.429663 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-02 05:19:08.429708 | orchestrator | Monday 02 February 2026 05:19:04 +0000 (0:00:03.685) 0:00:38.068 ******* 2026-02-02 05:19:08.429732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 05:19:13.153111 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:19:13.153213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 05:19:13.153226 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:19:13.153233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 
'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 05:19:13.153258 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:19:13.153265 | orchestrator | 2026-02-02 05:19:13.153272 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-02-02 05:19:13.153279 | orchestrator | Monday 02 February 2026 05:19:08 +0000 (0:00:04.248) 0:00:42.316 ******* 2026-02-02 05:19:13.153304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-02 05:19:13.153313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-02 05:19:13.153333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-02 05:19:29.022195 | orchestrator | 2026-02-02 05:19:29.022337 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-02-02 05:19:29.022358 | orchestrator | Monday 02 February 2026 05:19:13 +0000 (0:00:04.724) 0:00:47.040 ******* 2026-02-02 05:19:29.022371 | orchestrator | changed: [testbed-node-0] => { 2026-02-02 05:19:29.022384 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:19:29.022396 | orchestrator | } 2026-02-02 05:19:29.022407 | orchestrator | changed: [testbed-node-1] => { 2026-02-02 05:19:29.022418 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:19:29.022429 | orchestrator | } 2026-02-02 05:19:29.022440 | orchestrator | 
changed: [testbed-node-2] => { 2026-02-02 05:19:29.022451 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:19:29.022461 | orchestrator | } 2026-02-02 05:19:29.022473 | orchestrator | 2026-02-02 05:19:29.022484 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-02 05:19:29.022495 | orchestrator | Monday 02 February 2026 05:19:14 +0000 (0:00:01.557) 0:00:48.598 ******* 2026-02-02 05:19:29.022535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 05:19:29.022552 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:19:29.022592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 05:19:29.022607 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:19:29.022619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 05:19:29.022640 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:19:29.022651 | orchestrator | 2026-02-02 05:19:29.022663 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-02-02 05:19:29.022675 | orchestrator | Monday 02 February 2026 05:19:19 +0000 (0:00:04.311) 0:00:52.909 ******* 2026-02-02 05:19:29.022710 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:19:29.022722 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:19:29.022733 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:19:29.022743 | orchestrator | 2026-02-02 05:19:29.022754 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-02-02 05:19:29.022765 | orchestrator | Monday 02 February 2026 05:19:20 +0000 (0:00:01.366) 0:00:54.275 ******* 2026-02-02 05:19:29.022776 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:19:29.022787 | orchestrator | 2026-02-02 05:19:29.022798 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-02-02 05:19:29.022809 | orchestrator | Monday 02 February 2026 05:19:21 +0000 (0:00:01.304) 0:00:55.580 ******* 2026-02-02 05:19:29.022819 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:19:29.022830 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:19:29.022840 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:19:29.022851 | orchestrator | 2026-02-02 05:19:29.022861 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************ 2026-02-02 05:19:29.022872 | orchestrator | Monday 02 February 2026 05:19:23 +0000 (0:00:01.438) 0:00:57.019 ******* 2026-02-02 05:19:29.022883 | orchestrator | skipping: 
[testbed-node-0] 2026-02-02 05:19:29.022893 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:19:29.022904 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:19:29.022914 | orchestrator | 2026-02-02 05:19:29.022925 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-02-02 05:19:29.022936 | orchestrator | Monday 02 February 2026 05:19:24 +0000 (0:00:01.683) 0:00:58.703 ******* 2026-02-02 05:19:29.022947 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:19:29.022957 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:19:29.022968 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:19:29.022978 | orchestrator | 2026-02-02 05:19:29.022989 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-02-02 05:19:29.023000 | orchestrator | Monday 02 February 2026 05:19:26 +0000 (0:00:01.489) 0:01:00.193 ******* 2026-02-02 05:19:29.023017 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:19:29.023028 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:19:29.023038 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:19:29.023049 | orchestrator | 2026-02-02 05:19:29.023059 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-02-02 05:19:29.023070 | orchestrator | Monday 02 February 2026 05:19:27 +0000 (0:00:01.316) 0:01:01.509 ******* 2026-02-02 05:19:29.023081 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:19:29.023091 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:19:29.023102 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:19:29.023113 | orchestrator | 2026-02-02 05:19:29.023132 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-02-02 05:19:47.654272 | orchestrator | Monday 02 February 2026 05:19:29 +0000 (0:00:01.404) 0:01:02.913 ******* 2026-02-02 05:19:47.654378 | orchestrator | skipping: 
[testbed-node-0] 2026-02-02 05:19:47.654391 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:19:47.654401 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:19:47.654411 | orchestrator | 2026-02-02 05:19:47.654421 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-02-02 05:19:47.654430 | orchestrator | Monday 02 February 2026 05:19:30 +0000 (0:00:01.732) 0:01:04.646 ******* 2026-02-02 05:19:47.654439 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-02 05:19:47.654448 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-02 05:19:47.654456 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-02 05:19:47.654465 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:19:47.654474 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-02 05:19:47.654482 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-02 05:19:47.654491 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-02 05:19:47.654500 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:19:47.654508 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-02 05:19:47.654517 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-02 05:19:47.654525 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-02 05:19:47.654534 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:19:47.654543 | orchestrator | 2026-02-02 05:19:47.654552 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-02-02 05:19:47.654560 | orchestrator | Monday 02 February 2026 05:19:32 +0000 (0:00:01.482) 0:01:06.129 ******* 2026-02-02 05:19:47.654569 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:19:47.654578 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:19:47.654586 | 
orchestrator | skipping: [testbed-node-2] 2026-02-02 05:19:47.654595 | orchestrator | 2026-02-02 05:19:47.654604 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-02-02 05:19:47.654612 | orchestrator | Monday 02 February 2026 05:19:33 +0000 (0:00:01.388) 0:01:07.517 ******* 2026-02-02 05:19:47.654621 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:19:47.654629 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:19:47.654638 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:19:47.654646 | orchestrator | 2026-02-02 05:19:47.654655 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-02-02 05:19:47.654664 | orchestrator | Monday 02 February 2026 05:19:35 +0000 (0:00:01.558) 0:01:09.076 ******* 2026-02-02 05:19:47.654672 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:19:47.654681 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:19:47.654690 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:19:47.654737 | orchestrator | 2026-02-02 05:19:47.654754 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-02-02 05:19:47.654770 | orchestrator | Monday 02 February 2026 05:19:36 +0000 (0:00:01.417) 0:01:10.494 ******* 2026-02-02 05:19:47.654785 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:19:47.654799 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:19:47.654836 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:19:47.654847 | orchestrator | 2026-02-02 05:19:47.654857 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-02-02 05:19:47.654868 | orchestrator | Monday 02 February 2026 05:19:37 +0000 (0:00:01.359) 0:01:11.854 ******* 2026-02-02 05:19:47.654878 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:19:47.654887 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:19:47.654897 | 
orchestrator | skipping: [testbed-node-2] 2026-02-02 05:19:47.654907 | orchestrator | 2026-02-02 05:19:47.654917 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-02-02 05:19:47.654928 | orchestrator | Monday 02 February 2026 05:19:39 +0000 (0:00:01.361) 0:01:13.215 ******* 2026-02-02 05:19:47.654937 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:19:47.654947 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:19:47.654957 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:19:47.654967 | orchestrator | 2026-02-02 05:19:47.654977 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-02-02 05:19:47.654987 | orchestrator | Monday 02 February 2026 05:19:40 +0000 (0:00:01.617) 0:01:14.833 ******* 2026-02-02 05:19:47.654997 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:19:47.655013 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:19:47.655030 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:19:47.655053 | orchestrator | 2026-02-02 05:19:47.655068 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-02-02 05:19:47.655082 | orchestrator | Monday 02 February 2026 05:19:42 +0000 (0:00:01.478) 0:01:16.312 ******* 2026-02-02 05:19:47.655096 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:19:47.655109 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:19:47.655124 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:19:47.655138 | orchestrator | 2026-02-02 05:19:47.655152 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-02-02 05:19:47.655167 | orchestrator | Monday 02 February 2026 05:19:43 +0000 (0:00:01.383) 0:01:17.696 ******* 2026-02-02 05:19:47.655228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 05:19:47.655246 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:19:47.655270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 05:19:47.655282 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:19:47.655307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 05:20:05.741480 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:20:05.741593 | orchestrator | 2026-02-02 05:20:05.741614 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-02-02 05:20:05.741630 | orchestrator | Monday 02 February 2026 
05:19:47 +0000 (0:00:03.843) 0:01:21.539 ******* 2026-02-02 05:20:05.741667 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:20:05.741681 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:20:05.741693 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:20:05.741705 | orchestrator | 2026-02-02 05:20:05.741799 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-02-02 05:20:05.741816 | orchestrator | Monday 02 February 2026 05:19:49 +0000 (0:00:01.713) 0:01:23.253 ******* 2026-02-02 05:20:05.741835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 05:20:05.741854 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:20:05.741900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 05:20:05.741930 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:20:05.741945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-02 05:20:05.741960 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:20:05.741972 | orchestrator | 2026-02-02 05:20:05.741986 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-02-02 05:20:05.742000 | orchestrator | Monday 02 February 2026 05:19:52 +0000 (0:00:03.633) 0:01:26.886 ******* 2026-02-02 05:20:05.742013 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:20:05.742089 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:20:05.742104 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:20:05.742119 | orchestrator | 2026-02-02 05:20:05.742134 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-02-02 05:20:05.742149 | orchestrator | Monday 02 February 2026 05:19:54 +0000 (0:00:01.899) 0:01:28.786 ******* 2026-02-02 05:20:05.742164 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:20:05.742179 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:20:05.742194 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:20:05.742209 | orchestrator | 2026-02-02 05:20:05.742224 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-02-02 05:20:05.742237 | orchestrator | Monday 02 February 2026 05:19:56 +0000 (0:00:01.669) 0:01:30.455 ******* 2026-02-02 05:20:05.742250 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:20:05.742259 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:20:05.742267 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:20:05.742275 | orchestrator | 2026-02-02 05:20:05.742283 | orchestrator 
| TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-02-02 05:20:05.742291 | orchestrator | Monday 02 February 2026 05:19:58 +0000 (0:00:01.560) 0:01:32.016 ******* 2026-02-02 05:20:05.742298 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:20:05.742306 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:20:05.742313 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:20:05.742322 | orchestrator | 2026-02-02 05:20:05.742329 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-02 05:20:05.742351 | orchestrator | Monday 02 February 2026 05:20:00 +0000 (0:00:01.929) 0:01:33.946 ******* 2026-02-02 05:20:05.742359 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:20:05.742367 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:20:05.742374 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:20:05.742381 | orchestrator | 2026-02-02 05:20:05.742388 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-02-02 05:20:05.742394 | orchestrator | Monday 02 February 2026 05:20:02 +0000 (0:00:02.110) 0:01:36.057 ******* 2026-02-02 05:20:05.742401 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:20:05.742409 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:20:05.742415 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:20:05.742422 | orchestrator | 2026-02-02 05:20:05.742429 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-02-02 05:20:05.742436 | orchestrator | Monday 02 February 2026 05:20:04 +0000 (0:00:01.963) 0:01:38.021 ******* 2026-02-02 05:20:05.742442 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:20:05.742449 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:20:05.742455 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:20:05.742462 | orchestrator | 2026-02-02 05:20:05.742469 | orchestrator | TASK [mariadb : Establish whether the 
cluster has already existed] ************* 2026-02-02 05:20:05.742475 | orchestrator | Monday 02 February 2026 05:20:05 +0000 (0:00:01.384) 0:01:39.405 ******* 2026-02-02 05:20:05.742491 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:22:53.090505 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:22:53.090641 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:22:53.090657 | orchestrator | 2026-02-02 05:22:53.090670 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-02-02 05:22:53.090683 | orchestrator | Monday 02 February 2026 05:20:06 +0000 (0:00:01.403) 0:01:40.809 ******* 2026-02-02 05:22:53.090694 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:22:53.090705 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:22:53.090716 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:22:53.090727 | orchestrator | 2026-02-02 05:22:53.090738 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-02-02 05:22:53.090749 | orchestrator | Monday 02 February 2026 05:20:08 +0000 (0:00:02.039) 0:01:42.848 ******* 2026-02-02 05:22:53.090760 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:22:53.090771 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:22:53.090781 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:22:53.090792 | orchestrator | 2026-02-02 05:22:53.090803 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-02-02 05:22:53.090814 | orchestrator | Monday 02 February 2026 05:20:10 +0000 (0:00:01.332) 0:01:44.180 ******* 2026-02-02 05:22:53.090897 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:22:53.090910 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:22:53.090921 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:22:53.090931 | orchestrator | 2026-02-02 05:22:53.090942 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-02-02 
05:22:53.090954 | orchestrator | Monday 02 February 2026 05:20:11 +0000 (0:00:01.476) 0:01:45.657 ******* 2026-02-02 05:22:53.090965 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:22:53.090976 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:22:53.090987 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:22:53.090998 | orchestrator | 2026-02-02 05:22:53.091009 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-02-02 05:22:53.091020 | orchestrator | Monday 02 February 2026 05:20:15 +0000 (0:00:03.950) 0:01:49.608 ******* 2026-02-02 05:22:53.091031 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:22:53.091042 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:22:53.091052 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:22:53.091063 | orchestrator | 2026-02-02 05:22:53.091074 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-02-02 05:22:53.091085 | orchestrator | Monday 02 February 2026 05:20:17 +0000 (0:00:01.538) 0:01:51.147 ******* 2026-02-02 05:22:53.091119 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:22:53.091130 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:22:53.091141 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:22:53.091152 | orchestrator | 2026-02-02 05:22:53.091163 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-02-02 05:22:53.091174 | orchestrator | Monday 02 February 2026 05:20:18 +0000 (0:00:01.422) 0:01:52.569 ******* 2026-02-02 05:22:53.091185 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:22:53.091196 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:22:53.091207 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:22:53.091218 | orchestrator | 2026-02-02 05:22:53.091229 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-02 05:22:53.091240 | orchestrator | Monday 02 
February 2026 05:20:20 +0000 (0:00:01.776) 0:01:54.346 ******* 2026-02-02 05:22:53.091251 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:22:53.091261 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:22:53.091272 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:22:53.091283 | orchestrator | 2026-02-02 05:22:53.091294 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-02 05:22:53.091305 | orchestrator | Monday 02 February 2026 05:20:21 +0000 (0:00:01.466) 0:01:55.812 ******* 2026-02-02 05:22:53.091316 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:22:53.091327 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:22:53.091338 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:22:53.091348 | orchestrator | 2026-02-02 05:22:53.091359 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-02-02 05:22:53.091370 | orchestrator | Monday 02 February 2026 05:20:23 +0000 (0:00:01.618) 0:01:57.430 ******* 2026-02-02 05:22:53.091381 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:22:53.091392 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:22:53.091403 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:22:53.091414 | orchestrator | 2026-02-02 05:22:53.091424 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-02-02 05:22:53.091435 | orchestrator | Monday 02 February 2026 05:20:25 +0000 (0:00:01.613) 0:01:59.044 ******* 2026-02-02 05:22:53.091446 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:22:53.091457 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:22:53.091468 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:22:53.091479 | orchestrator | 2026-02-02 05:22:53.091489 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-02 05:22:53.091501 | orchestrator | 2026-02-02 
05:22:53.091520 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-02 05:22:53.091556 | orchestrator | Monday 02 February 2026 05:20:27 +0000 (0:00:02.256) 0:02:01.300 ******* 2026-02-02 05:22:53.091576 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:22:53.091597 | orchestrator | 2026-02-02 05:22:53.091617 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-02 05:22:53.091635 | orchestrator | Monday 02 February 2026 05:20:55 +0000 (0:00:27.940) 0:02:29.240 ******* 2026-02-02 05:22:53.091650 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for MariaDB service port liveness (10 retries left). 2026-02-02 05:22:53.091662 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:22:53.091673 | orchestrator | 2026-02-02 05:22:53.091684 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-02 05:22:53.091695 | orchestrator | Monday 02 February 2026 05:21:03 +0000 (0:00:08.156) 0:02:37.397 ******* 2026-02-02 05:22:53.091705 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:22:53.091716 | orchestrator | 2026-02-02 05:22:53.091727 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-02 05:22:53.091737 | orchestrator | 2026-02-02 05:22:53.091748 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-02 05:22:53.091759 | orchestrator | Monday 02 February 2026 05:21:06 +0000 (0:00:02.980) 0:02:40.378 ******* 2026-02-02 05:22:53.091770 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:22:53.091791 | orchestrator | 2026-02-02 05:22:53.091847 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-02 05:22:53.091862 | orchestrator | Monday 02 February 2026 05:21:38 +0000 (0:00:32.479) 0:03:12.857 ******* 2026-02-02 05:22:53.091873 | orchestrator | 
ok: [testbed-node-1] 2026-02-02 05:22:53.091884 | orchestrator | 2026-02-02 05:22:53.091894 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-02 05:22:53.091905 | orchestrator | Monday 02 February 2026 05:21:40 +0000 (0:00:01.242) 0:03:14.100 ******* 2026-02-02 05:22:53.091916 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:22:53.091926 | orchestrator | 2026-02-02 05:22:53.091937 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-02 05:22:53.091948 | orchestrator | 2026-02-02 05:22:53.091959 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-02 05:22:53.091970 | orchestrator | Monday 02 February 2026 05:21:43 +0000 (0:00:03.571) 0:03:17.671 ******* 2026-02-02 05:22:53.091980 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:22:53.091991 | orchestrator | 2026-02-02 05:22:53.092002 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-02 05:22:53.092013 | orchestrator | Monday 02 February 2026 05:22:10 +0000 (0:00:26.939) 0:03:44.611 ******* 2026-02-02 05:22:53.092023 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Wait for MariaDB service port liveness (10 retries left). 
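The "Wait for MariaDB service port liveness" retries seen above (FAILED - RETRYING, then ok) are a connect-with-retries pattern. A minimal sketch of that wait loop, assuming illustrative host/port/timeout values rather than anything taken from the playbook:

```python
import socket
import time

def wait_for_port(host: str, port: int, retries: int = 10, delay: float = 1.0) -> bool:
    """Return True once a TCP connection to host:port succeeds,
    retrying like Ansible's wait_for; False when retries run out."""
    for attempt in range(retries):
        try:
            with socket.create_connection((host, port), timeout=5):
                return True
        except OSError:
            if attempt < retries - 1:
                time.sleep(delay)
    return False
```

Each "FAILED - RETRYING ... (N retries left)" line in the log corresponds to one failed connect attempt in a loop like this before the port finally answers.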
2026-02-02 05:22:53.092034 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:22:53.092045 | orchestrator | 2026-02-02 05:22:53.092056 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-02 05:22:53.092066 | orchestrator | Monday 02 February 2026 05:22:18 +0000 (0:00:07.904) 0:03:52.515 ******* 2026-02-02 05:22:53.092077 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-02-02 05:22:53.092088 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-02 05:22:53.092098 | orchestrator | mariadb_bootstrap_restart 2026-02-02 05:22:53.092109 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:22:53.092120 | orchestrator | 2026-02-02 05:22:53.092131 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-02 05:22:53.092141 | orchestrator | skipping: no hosts matched 2026-02-02 05:22:53.092152 | orchestrator | 2026-02-02 05:22:53.092163 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-02 05:22:53.092174 | orchestrator | skipping: no hosts matched 2026-02-02 05:22:53.092184 | orchestrator | 2026-02-02 05:22:53.092195 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-02 05:22:53.092206 | orchestrator | 2026-02-02 05:22:53.092216 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-02 05:22:53.092227 | orchestrator | Monday 02 February 2026 05:22:22 +0000 (0:00:04.295) 0:03:56.810 ******* 2026-02-02 05:22:53.092238 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:22:53.092248 | orchestrator | 2026-02-02 05:22:53.092259 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-02-02 05:22:53.092270 | orchestrator | Monday 02 February 2026 
05:22:25 +0000 (0:00:02.107) 0:03:58.917 ******* 2026-02-02 05:22:53.092281 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:22:53.092291 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:22:53.092662 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:22:53.092689 | orchestrator | 2026-02-02 05:22:53.092701 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-02-02 05:22:53.092712 | orchestrator | Monday 02 February 2026 05:22:28 +0000 (0:00:03.143) 0:04:02.061 ******* 2026-02-02 05:22:53.092723 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:22:53.092734 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:22:53.092744 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:22:53.092755 | orchestrator | 2026-02-02 05:22:53.092766 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-02-02 05:22:53.092788 | orchestrator | Monday 02 February 2026 05:22:31 +0000 (0:00:03.224) 0:04:05.285 ******* 2026-02-02 05:22:53.092799 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:22:53.092810 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:22:53.092847 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:22:53.092859 | orchestrator | 2026-02-02 05:22:53.092870 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-02-02 05:22:53.092881 | orchestrator | Monday 02 February 2026 05:22:34 +0000 (0:00:03.239) 0:04:08.525 ******* 2026-02-02 05:22:53.092891 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:22:53.092902 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:22:53.092913 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:22:53.092924 | orchestrator | 2026-02-02 05:22:53.092934 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-02-02 05:22:53.092945 | orchestrator | Monday 02 February 2026 05:22:37 +0000 
(0:00:03.177) 0:04:11.702 ******* 2026-02-02 05:22:53.092956 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:22:53.092976 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:22:53.092987 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:22:53.092998 | orchestrator | 2026-02-02 05:22:53.093009 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-02-02 05:22:53.093020 | orchestrator | Monday 02 February 2026 05:22:44 +0000 (0:00:06.489) 0:04:18.191 ******* 2026-02-02 05:22:53.093031 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:22:53.093041 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:22:53.093052 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:22:53.093063 | orchestrator | 2026-02-02 05:22:53.093074 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-02-02 05:22:53.093085 | orchestrator | Monday 02 February 2026 05:22:48 +0000 (0:00:03.748) 0:04:21.941 ******* 2026-02-02 05:22:53.093096 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:22:53.093106 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:22:53.093117 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:22:53.093128 | orchestrator | 2026-02-02 05:22:53.093139 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-02 05:22:53.093149 | orchestrator | Monday 02 February 2026 05:22:49 +0000 (0:00:01.595) 0:04:23.537 ******* 2026-02-02 05:22:53.093160 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:22:53.093171 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:22:53.093182 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:22:53.093192 | orchestrator | 2026-02-02 05:22:53.093214 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-02 05:23:14.162133 | orchestrator | Monday 02 February 2026 05:22:53 +0000 (0:00:03.437) 0:04:26.975 ******* 
2026-02-02 05:23:14.162211 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:23:14.162218 | orchestrator | 2026-02-02 05:23:14.162223 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ****************************** 2026-02-02 05:23:14.162228 | orchestrator | Monday 02 February 2026 05:22:55 +0000 (0:00:02.096) 0:04:29.071 ******* 2026-02-02 05:23:14.162232 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:23:14.162238 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:23:14.162242 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:23:14.162246 | orchestrator | 2026-02-02 05:23:14.162250 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 05:23:14.162256 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-02 05:23:14.162262 | orchestrator | testbed-node-1 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-02 05:23:14.162266 | orchestrator | testbed-node-2 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-02 05:23:14.162270 | orchestrator | 2026-02-02 05:23:14.162274 | orchestrator | 2026-02-02 05:23:14.162294 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 05:23:14.162298 | orchestrator | Monday 02 February 2026 05:23:13 +0000 (0:00:18.470) 0:04:47.541 ******* 2026-02-02 05:23:14.162302 | orchestrator | =============================================================================== 2026-02-02 05:23:14.162306 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 87.36s 2026-02-02 05:23:14.162310 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 18.47s 2026-02-02 05:23:14.162314 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 
17.30s 2026-02-02 05:23:14.162318 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ----------------------- 10.85s 2026-02-02 05:23:14.162322 | orchestrator | service-check : mariadb | Get container facts --------------------------- 6.49s 2026-02-02 05:23:14.162326 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 5.14s 2026-02-02 05:23:14.162330 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.80s 2026-02-02 05:23:14.162334 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 4.72s 2026-02-02 05:23:14.162338 | orchestrator | service-check-containers : Include tasks -------------------------------- 4.31s 2026-02-02 05:23:14.162341 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 4.25s 2026-02-02 05:23:14.162345 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.96s 2026-02-02 05:23:14.162349 | orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 3.95s 2026-02-02 05:23:14.162353 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 3.84s 2026-02-02 05:23:14.162357 | orchestrator | service-check : mariadb | Fail if containers are missing or not running --- 3.75s 2026-02-02 05:23:14.162361 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.69s 2026-02-02 05:23:14.162366 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 3.63s 2026-02-02 05:23:14.162370 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.60s 2026-02-02 05:23:14.162374 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.44s 2026-02-02 05:23:14.162378 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 3.24s 
2026-02-02 05:23:14.162382 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 3.22s 2026-02-02 05:23:14.540117 | orchestrator | + osism apply -a upgrade rabbitmq 2026-02-02 05:23:16.665755 | orchestrator | 2026-02-02 05:23:16 | INFO  | Task 7cb35f7a-9c00-4dba-94b5-e46fbcd01587 (rabbitmq) was prepared for execution. 2026-02-02 05:23:16.665977 | orchestrator | 2026-02-02 05:23:16 | INFO  | It takes a moment until task 7cb35f7a-9c00-4dba-94b5-e46fbcd01587 (rabbitmq) has been started and output is visible here. 2026-02-02 05:24:01.608975 | orchestrator | 2026-02-02 05:24:01.609078 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 05:24:01.609095 | orchestrator | 2026-02-02 05:24:01.609107 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 05:24:01.609119 | orchestrator | Monday 02 February 2026 05:23:23 +0000 (0:00:01.943) 0:00:01.943 ******* 2026-02-02 05:24:01.609131 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:24:01.609142 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:24:01.609153 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:24:01.609164 | orchestrator | 2026-02-02 05:24:01.609176 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 05:24:01.609187 | orchestrator | Monday 02 February 2026 05:23:24 +0000 (0:00:01.783) 0:00:03.726 ******* 2026-02-02 05:24:01.609197 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-02-02 05:24:01.609209 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-02-02 05:24:01.609220 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-02-02 05:24:01.609231 | orchestrator | 2026-02-02 05:24:01.609262 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-02-02 05:24:01.609273 | orchestrator | 
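The mariadb tasks above repeatedly gate progress on the node's WSREP sync status ("Wait for MariaDB service to sync WSREP") before moving to the next cluster member. A minimal sketch of that polling logic, with the status query injected as a callable so nothing here depends on a live Galera node (function and parameter names are illustrative, not the role's code):

```python
import time
from typing import Callable

def wait_for_wsrep_sync(get_state: Callable[[], str],
                        retries: int = 10, delay: float = 1.0) -> bool:
    """Poll a Galera node's wsrep_local_state_comment until it
    reports 'Synced'; False if it never reaches that state."""
    for attempt in range(retries):
        if get_state() == "Synced":
            return True
        if attempt < retries - 1:
            time.sleep(delay)
    return False
```

In a real check, `get_state` would run `SHOW STATUS LIKE 'wsrep_local_state_comment'` against the restarted node; restarting members one at a time and waiting for "Synced" is what keeps the rolling restart in the log from losing quorum.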
2026-02-02 05:24:01.609284 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-02 05:24:01.609295 | orchestrator | Monday 02 February 2026 05:23:26 +0000 (0:00:02.082) 0:00:05.808 ******* 2026-02-02 05:24:01.609306 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:24:01.609318 | orchestrator | 2026-02-02 05:24:01.609329 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-02 05:24:01.609340 | orchestrator | Monday 02 February 2026 05:23:30 +0000 (0:00:03.101) 0:00:08.910 ******* 2026-02-02 05:24:01.609350 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:24:01.609362 | orchestrator | 2026-02-02 05:24:01.609374 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-02-02 05:24:01.609385 | orchestrator | Monday 02 February 2026 05:23:32 +0000 (0:00:02.374) 0:00:11.284 ******* 2026-02-02 05:24:01.609395 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:24:01.609406 | orchestrator | 2026-02-02 05:24:01.609417 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-02-02 05:24:01.609428 | orchestrator | Monday 02 February 2026 05:23:35 +0000 (0:00:03.188) 0:00:14.472 ******* 2026-02-02 05:24:01.609438 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:24:01.609450 | orchestrator | 2026-02-02 05:24:01.609461 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-02-02 05:24:01.609472 | orchestrator | Monday 02 February 2026 05:23:45 +0000 (0:00:09.686) 0:00:24.159 ******* 2026-02-02 05:24:01.609483 | orchestrator | ok: [testbed-node-0] => { 2026-02-02 05:24:01.609494 | orchestrator |  "changed": false, 2026-02-02 05:24:01.609508 | orchestrator |  "msg": "All assertions passed" 2026-02-02 05:24:01.609521 | orchestrator | } 2026-02-02 
05:24:01.609534 | orchestrator | 2026-02-02 05:24:01.609547 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-02-02 05:24:01.609560 | orchestrator | Monday 02 February 2026 05:23:46 +0000 (0:00:01.350) 0:00:25.510 ******* 2026-02-02 05:24:01.609573 | orchestrator | ok: [testbed-node-0] => { 2026-02-02 05:24:01.609584 | orchestrator |  "changed": false, 2026-02-02 05:24:01.609595 | orchestrator |  "msg": "All assertions passed" 2026-02-02 05:24:01.609606 | orchestrator | } 2026-02-02 05:24:01.609617 | orchestrator | 2026-02-02 05:24:01.609628 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-02 05:24:01.609638 | orchestrator | Monday 02 February 2026 05:23:48 +0000 (0:00:01.705) 0:00:27.215 ******* 2026-02-02 05:24:01.609649 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:24:01.609660 | orchestrator | 2026-02-02 05:24:01.609671 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-02 05:24:01.609682 | orchestrator | Monday 02 February 2026 05:23:50 +0000 (0:00:01.705) 0:00:28.921 ******* 2026-02-02 05:24:01.609693 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:24:01.609703 | orchestrator | 2026-02-02 05:24:01.609714 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-02 05:24:01.609725 | orchestrator | Monday 02 February 2026 05:23:52 +0000 (0:00:02.188) 0:00:31.109 ******* 2026-02-02 05:24:01.609736 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:24:01.609747 | orchestrator | 2026-02-02 05:24:01.609758 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-02 05:24:01.609768 | orchestrator | Monday 02 February 2026 05:23:55 +0000 (0:00:03.284) 0:00:34.394 ******* 2026-02-02 05:24:01.609779 | 
orchestrator | skipping: [testbed-node-0] 2026-02-02 05:24:01.609790 | orchestrator | 2026-02-02 05:24:01.609801 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-02 05:24:01.609812 | orchestrator | Monday 02 February 2026 05:23:57 +0000 (0:00:01.912) 0:00:36.306 ******* 2026-02-02 05:24:01.609883 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 05:24:01.609929 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 05:24:01.609953 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 05:24:01.609966 | orchestrator | 2026-02-02 05:24:01.609978 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-02 05:24:01.609989 | orchestrator | Monday 02 February 2026 05:23:59 +0000 (0:00:01.740) 0:00:38.047 ******* 2026-02-02 05:24:01.610000 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 05:24:01.610070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 05:24:20.708552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 05:24:20.708673 | orchestrator | 2026-02-02 05:24:20.708691 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-02-02 05:24:20.708705 | orchestrator | Monday 02 February 2026 05:24:01 +0000 (0:00:02.452) 0:00:40.499 ******* 2026-02-02 05:24:20.708716 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-02 05:24:20.708729 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-02 05:24:20.708740 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-02 05:24:20.708751 | 
orchestrator | 2026-02-02 05:24:20.708762 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-02-02 05:24:20.708773 | orchestrator | Monday 02 February 2026 05:24:03 +0000 (0:00:02.318) 0:00:42.817 ******* 2026-02-02 05:24:20.708785 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-02 05:24:20.708796 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-02 05:24:20.708807 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-02 05:24:20.708817 | orchestrator | 2026-02-02 05:24:20.708828 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-02 05:24:20.708861 | orchestrator | Monday 02 February 2026 05:24:06 +0000 (0:00:02.934) 0:00:45.752 ******* 2026-02-02 05:24:20.708973 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-02 05:24:20.708987 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-02 05:24:20.708998 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-02 05:24:20.709009 | orchestrator | 2026-02-02 05:24:20.709019 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-02-02 05:24:20.709030 | orchestrator | Monday 02 February 2026 05:24:09 +0000 (0:00:02.352) 0:00:48.104 ******* 2026-02-02 05:24:20.709041 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-02 05:24:20.709051 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-02 05:24:20.709062 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 
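The rabbitmq role's "Check if running RabbitMQ is at most one version behind" and "Catch when RabbitMQ is being downgraded" assertions earlier in this play compare the running and target versions before touching the containers. A rough sketch of such a guard; the exact comparison rule here is illustrative, not the role's actual policy:

```python
def version_tuple(version: str) -> tuple[int, int]:
    """Reduce a version string like '4.1.5.20251208' to (major, minor)."""
    parts = version.split(".")
    return int(parts[0]), int(parts[1])

def upgrade_gap_ok(current: str, target: str) -> bool:
    """True when target is not a downgrade and is at most one
    feature release ahead of current (illustrative policy)."""
    cur, new = version_tuple(current), version_tuple(target)
    if new < cur:
        return False  # downgrade: the role fails this in a separate assert
    if new[0] == cur[0]:
        return new[1] - cur[1] <= 1
    return new[0] - cur[0] == 1 and new[1] == 0
```

Failing fast on too large a version jump (or a downgrade) is what lets the play abort before any broker is restarted with an incompatible image.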
2026-02-02 05:24:20.709073 | orchestrator | 2026-02-02 05:24:20.709086 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-02-02 05:24:20.709099 | orchestrator | Monday 02 February 2026 05:24:11 +0000 (0:00:02.400) 0:00:50.505 ******* 2026-02-02 05:24:20.709111 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-02 05:24:20.709124 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-02 05:24:20.709136 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-02 05:24:20.709148 | orchestrator | 2026-02-02 05:24:20.709160 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-02-02 05:24:20.709187 | orchestrator | Monday 02 February 2026 05:24:13 +0000 (0:00:02.270) 0:00:52.775 ******* 2026-02-02 05:24:20.709201 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-02 05:24:20.709215 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-02 05:24:20.709227 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-02 05:24:20.709239 | orchestrator | 2026-02-02 05:24:20.709252 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-02 05:24:20.709265 | orchestrator | Monday 02 February 2026 05:24:16 +0000 (0:00:02.611) 0:00:55.387 ******* 2026-02-02 05:24:20.709277 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:24:20.709290 | orchestrator | 2026-02-02 05:24:20.709322 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-02-02 05:24:20.709336 | orchestrator | Monday 
02 February 2026 05:24:18 +0000 (0:00:01.717) 0:00:57.104 ******* 2026-02-02 05:24:20.709351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 05:24:20.709376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 05:24:20.709391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 05:24:20.709406 | orchestrator | 2026-02-02 05:24:20.709424 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-02-02 05:24:20.709438 | orchestrator | Monday 02 February 2026 05:24:20 +0000 (0:00:02.273) 0:00:59.378 ******* 2026-02-02 05:24:20.709460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': 
{'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-02 05:24:30.510497 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:24:30.510634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 
'rabbitmq'}}}})  2026-02-02 05:24:30.510693 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:24:30.510711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-02 05:24:30.510722 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:24:30.510732 | orchestrator | 2026-02-02 05:24:30.510743 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-02-02 05:24:30.510754 | orchestrator | Monday 02 February 2026 05:24:21 +0000 (0:00:01.467) 0:01:00.845 ******* 2026-02-02 05:24:30.510779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-02 05:24:30.510812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-02 05:24:30.510832 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:24:30.510842 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:24:30.510852 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-02 05:24:30.510862 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:24:30.510948 | orchestrator | 2026-02-02 05:24:30.510965 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-02 05:24:30.510976 | orchestrator | Monday 02 February 2026 05:24:23 +0000 (0:00:01.936) 0:01:02.782 ******* 2026-02-02 05:24:30.510987 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:24:30.510999 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:24:30.511010 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:24:30.511021 | orchestrator | 2026-02-02 05:24:30.511034 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-02-02 05:24:30.511048 | orchestrator | Monday 02 February 2026 05:24:28 +0000 (0:00:04.227) 0:01:07.010 ******* 2026-02-02 05:24:30.511068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 05:24:30.511093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 
'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 05:26:16.816546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-02 05:26:16.816662 | orchestrator | 2026-02-02 05:26:16.816681 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-02-02 05:26:16.816695 | orchestrator | Monday 02 February 2026 05:24:30 +0000 (0:00:02.397) 0:01:09.408 ******* 2026-02-02 05:26:16.816707 | orchestrator | changed: [testbed-node-0] => { 2026-02-02 05:26:16.816720 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:26:16.816732 | orchestrator | } 2026-02-02 05:26:16.816744 | orchestrator | changed: [testbed-node-1] => { 2026-02-02 05:26:16.816755 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:26:16.816766 | orchestrator | } 2026-02-02 05:26:16.816778 | orchestrator | changed: [testbed-node-2] => { 2026-02-02 
05:26:16.816789 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:26:16.816800 | orchestrator | } 2026-02-02 05:26:16.816812 | orchestrator | 2026-02-02 05:26:16.816823 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-02 05:26:16.816834 | orchestrator | Monday 02 February 2026 05:24:32 +0000 (0:00:01.623) 0:01:11.031 ******* 2026-02-02 05:26:16.816865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-02 05:26:16.816878 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:26:16.816890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-02 05:26:16.816924 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:26:16.817034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-02 05:26:16.817048 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:26:16.817060 | orchestrator | 
2026-02-02 05:26:16.817072 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-02 05:26:16.817086 | orchestrator | Monday 02 February 2026 05:24:34 +0000 (0:00:02.205) 0:01:13.236 ******* 2026-02-02 05:26:16.817100 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:26:16.817114 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:26:16.817127 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:26:16.817141 | orchestrator | 2026-02-02 05:26:16.817154 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-02 05:26:16.817167 | orchestrator | 2026-02-02 05:26:16.817181 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-02 05:26:16.817194 | orchestrator | Monday 02 February 2026 05:24:36 +0000 (0:00:02.057) 0:01:15.293 ******* 2026-02-02 05:26:16.817208 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:26:16.817222 | orchestrator | 2026-02-02 05:26:16.817235 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-02 05:26:16.817249 | orchestrator | Monday 02 February 2026 05:24:38 +0000 (0:00:02.022) 0:01:17.316 ******* 2026-02-02 05:26:16.817262 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:26:16.817275 | orchestrator | 2026-02-02 05:26:16.817288 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-02 05:26:16.817301 | orchestrator | Monday 02 February 2026 05:24:47 +0000 (0:00:09.269) 0:01:26.585 ******* 2026-02-02 05:26:16.817315 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:26:16.817328 | orchestrator | 2026-02-02 05:26:16.817342 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-02 05:26:16.817354 | orchestrator | Monday 02 February 2026 05:24:56 +0000 (0:00:09.187) 0:01:35.773 ******* 2026-02-02 05:26:16.817365 | 
orchestrator | changed: [testbed-node-0] 2026-02-02 05:26:16.817377 | orchestrator | 2026-02-02 05:26:16.817389 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-02 05:26:16.817400 | orchestrator | 2026-02-02 05:26:16.817411 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-02 05:26:16.817432 | orchestrator | Monday 02 February 2026 05:25:06 +0000 (0:00:09.993) 0:01:45.767 ******* 2026-02-02 05:26:16.817443 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:26:16.817455 | orchestrator | 2026-02-02 05:26:16.817466 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-02 05:26:16.817478 | orchestrator | Monday 02 February 2026 05:25:08 +0000 (0:00:01.803) 0:01:47.570 ******* 2026-02-02 05:26:16.817489 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:26:16.817501 | orchestrator | 2026-02-02 05:26:16.817512 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-02 05:26:16.817523 | orchestrator | Monday 02 February 2026 05:25:18 +0000 (0:00:09.406) 0:01:56.977 ******* 2026-02-02 05:26:16.817541 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:26:16.817553 | orchestrator | 2026-02-02 05:26:16.817565 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-02 05:26:16.817576 | orchestrator | Monday 02 February 2026 05:25:32 +0000 (0:00:14.240) 0:02:11.217 ******* 2026-02-02 05:26:16.817587 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:26:16.817599 | orchestrator | 2026-02-02 05:26:16.817610 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-02 05:26:16.817622 | orchestrator | 2026-02-02 05:26:16.817633 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-02 05:26:16.817645 | 
orchestrator | Monday 02 February 2026 05:25:42 +0000 (0:00:10.351) 0:02:21.569 ******* 2026-02-02 05:26:16.817656 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:26:16.817667 | orchestrator | 2026-02-02 05:26:16.817679 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-02 05:26:16.817690 | orchestrator | Monday 02 February 2026 05:25:44 +0000 (0:00:01.679) 0:02:23.249 ******* 2026-02-02 05:26:16.817702 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:26:16.817713 | orchestrator | 2026-02-02 05:26:16.817725 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-02 05:26:16.817747 | orchestrator | Monday 02 February 2026 05:25:53 +0000 (0:00:08.805) 0:02:32.054 ******* 2026-02-02 05:26:16.817760 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:26:16.817772 | orchestrator | 2026-02-02 05:26:16.817783 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-02 05:26:16.817794 | orchestrator | Monday 02 February 2026 05:26:06 +0000 (0:00:13.552) 0:02:45.606 ******* 2026-02-02 05:26:16.817805 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:26:16.817816 | orchestrator | 2026-02-02 05:26:16.817827 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-02-02 05:26:16.817838 | orchestrator | 2026-02-02 05:26:16.817850 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-02-02 05:26:16.817872 | orchestrator | Monday 02 February 2026 05:26:16 +0000 (0:00:10.097) 0:02:55.703 ******* 2026-02-02 05:26:23.108825 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:26:23.108921 | orchestrator | 2026-02-02 05:26:23.108932 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-02 05:26:23.108978 | 
orchestrator | Monday 02 February 2026 05:26:18 +0000 (0:00:01.378) 0:02:57.082 ******* 2026-02-02 05:26:23.108984 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:26:23.108992 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:26:23.108997 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:26:23.109004 | orchestrator | 2026-02-02 05:26:23.109010 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 05:26:23.109017 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-02 05:26:23.109025 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-02 05:26:23.109032 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-02 05:26:23.109060 | orchestrator | 2026-02-02 05:26:23.109067 | orchestrator | 2026-02-02 05:26:23.109073 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 05:26:23.109079 | orchestrator | Monday 02 February 2026 05:26:22 +0000 (0:00:04.470) 0:03:01.552 ******* 2026-02-02 05:26:23.109085 | orchestrator | =============================================================================== 2026-02-02 05:26:23.109091 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 36.98s 2026-02-02 05:26:23.109098 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 30.44s 2026-02-02 05:26:23.109104 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 27.48s 2026-02-02 05:26:23.109109 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 9.69s 2026-02-02 05:26:23.109115 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 5.51s 2026-02-02 05:26:23.109121 | orchestrator | rabbitmq : Enable all 
stable feature flags ------------------------------ 4.47s 2026-02-02 05:26:23.109128 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 4.23s 2026-02-02 05:26:23.109134 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 3.28s 2026-02-02 05:26:23.109140 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 3.19s 2026-02-02 05:26:23.109146 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 3.10s 2026-02-02 05:26:23.109152 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.93s 2026-02-02 05:26:23.109158 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.61s 2026-02-02 05:26:23.109164 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.45s 2026-02-02 05:26:23.109168 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.40s 2026-02-02 05:26:23.109172 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 2.40s 2026-02-02 05:26:23.109176 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.37s 2026-02-02 05:26:23.109180 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.35s 2026-02-02 05:26:23.109183 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.32s 2026-02-02 05:26:23.109187 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 2.27s 2026-02-02 05:26:23.109191 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.27s 2026-02-02 05:26:23.475773 | orchestrator | + osism apply -a upgrade openvswitch 2026-02-02 05:26:25.646501 | orchestrator | 2026-02-02 05:26:25 | INFO  | Task 839e0906-0ec3-4a01-9af4-333e3bf4fa24 
(openvswitch) was prepared for execution. 2026-02-02 05:26:25.646624 | orchestrator | 2026-02-02 05:26:25 | INFO  | It takes a moment until task 839e0906-0ec3-4a01-9af4-333e3bf4fa24 (openvswitch) has been started and output is visible here. 2026-02-02 05:26:43.411849 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-02 05:26:43.411997 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-02 05:26:43.412027 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-02 05:26:43.412036 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-02 05:26:43.412055 | orchestrator | 2026-02-02 05:26:43.412066 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 05:26:43.412076 | orchestrator | 2026-02-02 05:26:43.412085 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 05:26:43.412095 | orchestrator | Monday 02 February 2026 05:26:31 +0000 (0:00:01.255) 0:00:01.255 ******* 2026-02-02 05:26:43.412126 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:26:43.412133 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:26:43.412139 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:26:43.412215 | orchestrator | ok: [testbed-node-3] 2026-02-02 05:26:43.412228 | orchestrator | ok: [testbed-node-4] 2026-02-02 05:26:43.412233 | orchestrator | ok: [testbed-node-5] 2026-02-02 05:26:43.412239 | orchestrator | 2026-02-02 05:26:43.412244 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 05:26:43.412250 | orchestrator | Monday 02 February 2026 05:26:32 +0000 (0:00:01.514) 0:00:02.769 ******* 2026-02-02 05:26:43.412255 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-02 05:26:43.412261 | orchestrator | ok: [testbed-node-1] => 
(item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-02 05:26:43.412266 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-02 05:26:43.412272 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-02 05:26:43.412277 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-02 05:26:43.412282 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-02 05:26:43.412287 | orchestrator | 2026-02-02 05:26:43.412296 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-02-02 05:26:43.412304 | orchestrator | 2026-02-02 05:26:43.412312 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-02-02 05:26:43.412319 | orchestrator | Monday 02 February 2026 05:26:33 +0000 (0:00:01.089) 0:00:03.858 ******* 2026-02-02 05:26:43.412328 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 05:26:43.412337 | orchestrator | 2026-02-02 05:26:43.412345 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-02 05:26:43.412353 | orchestrator | Monday 02 February 2026 05:26:35 +0000 (0:00:01.792) 0:00:05.651 ******* 2026-02-02 05:26:43.412361 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-02-02 05:26:43.412372 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-02-02 05:26:43.412383 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-02-02 05:26:43.412394 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-02-02 05:26:43.412405 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-02-02 05:26:43.412416 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 
2026-02-02 05:26:43.412428 | orchestrator | 2026-02-02 05:26:43.412442 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-02 05:26:43.412462 | orchestrator | Monday 02 February 2026 05:26:37 +0000 (0:00:01.491) 0:00:07.143 ******* 2026-02-02 05:26:43.412482 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-02-02 05:26:43.412501 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-02-02 05:26:43.412515 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-02-02 05:26:43.412526 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-02-02 05:26:43.412537 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-02-02 05:26:43.412549 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-02-02 05:26:43.412561 | orchestrator | 2026-02-02 05:26:43.412579 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-02 05:26:43.412591 | orchestrator | Monday 02 February 2026 05:26:38 +0000 (0:00:01.466) 0:00:08.609 ******* 2026-02-02 05:26:43.412604 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-02-02 05:26:43.412616 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:26:43.412629 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-02-02 05:26:43.412640 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:26:43.412651 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-02-02 05:26:43.412676 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:26:43.412690 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-02-02 05:26:43.412703 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:26:43.412714 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-02-02 05:26:43.412722 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:26:43.412730 | orchestrator | skipping: [testbed-node-5] => 
(item=openvswitch)  2026-02-02 05:26:43.412744 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:26:43.412752 | orchestrator | 2026-02-02 05:26:43.412760 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-02-02 05:26:43.412767 | orchestrator | Monday 02 February 2026 05:26:40 +0000 (0:00:01.868) 0:00:10.477 ******* 2026-02-02 05:26:43.412775 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:26:43.412783 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:26:43.412790 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:26:43.412798 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:26:43.412805 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:26:43.412831 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:26:43.412839 | orchestrator | 2026-02-02 05:26:43.412847 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-02 05:26:43.412855 | orchestrator | Monday 02 February 2026 05:26:41 +0000 (0:00:01.116) 0:00:11.594 ******* 2026-02-02 05:26:43.412865 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 05:26:43.412878 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 05:26:43.412887 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 05:26:43.412895 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 05:26:43.412913 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 05:26:43.412928 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 05:26:45.654258 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 
'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 05:26:45.654344 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 05:26:45.654354 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 05:26:45.654378 | orchestrator | ok: [testbed-node-4] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 05:26:45.654398 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 05:26:45.654418 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 05:26:45.654427 | orchestrator | 2026-02-02 05:26:45.654435 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-02 05:26:45.654443 | orchestrator | Monday 02 February 2026 05:26:43 +0000 (0:00:01.871) 0:00:13.466 ******* 2026-02-02 05:26:45.654450 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 05:26:45.654457 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 
2026-02-02 05:26:45.654465 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 05:26:45.654480 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 05:26:45.654487 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 05:26:45.654500 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 05:26:49.375039 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 05:26:49.375124 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 05:26:49.375149 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 05:26:49.375167 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 05:26:49.375174 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 05:26:49.375193 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 05:26:49.375200 | orchestrator | 2026-02-02 05:26:49.375207 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-02-02 05:26:49.375215 | orchestrator | Monday 02 February 2026 05:26:45 +0000 (0:00:02.342) 0:00:15.808 ******* 2026-02-02 05:26:49.375221 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:26:49.375228 | orchestrator | skipping: [testbed-node-1] 2026-02-02 
05:26:49.375233 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:26:49.375239 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:26:49.375245 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:26:49.375251 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:26:49.375256 | orchestrator | 2026-02-02 05:26:49.375263 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] *************** 2026-02-02 05:26:49.375268 | orchestrator | Monday 02 February 2026 05:26:47 +0000 (0:00:01.551) 0:00:17.360 ******* 2026-02-02 05:26:49.375280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 05:26:49.375287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 05:26:49.375297 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 05:26:49.375303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 05:26:49.375315 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 05:26:50.757354 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-02 05:26:50.757472 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 05:26:50.757492 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 05:26:50.757519 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 05:26:50.757533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 05:26:50.757563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 05:26:50.757578 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-02 05:26:50.757585 | orchestrator | 2026-02-02 05:26:50.757594 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] *** 2026-02-02 05:26:50.757602 | orchestrator | Monday 02 February 2026 05:26:49 +0000 (0:00:02.186) 
0:00:19.547 ******* 2026-02-02 05:26:50.757610 | orchestrator | changed: [testbed-node-0] => { 2026-02-02 05:26:50.757618 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:26:50.757625 | orchestrator | } 2026-02-02 05:26:50.757632 | orchestrator | changed: [testbed-node-1] => { 2026-02-02 05:26:50.757639 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:26:50.757646 | orchestrator | } 2026-02-02 05:26:50.757653 | orchestrator | changed: [testbed-node-2] => { 2026-02-02 05:26:50.757661 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:26:50.757672 | orchestrator | } 2026-02-02 05:26:50.757681 | orchestrator | changed: [testbed-node-3] => { 2026-02-02 05:26:50.757688 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:26:50.757695 | orchestrator | } 2026-02-02 05:26:50.757701 | orchestrator | changed: [testbed-node-4] => { 2026-02-02 05:26:50.757708 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:26:50.757715 | orchestrator | } 2026-02-02 05:26:50.757721 | orchestrator | changed: [testbed-node-5] => { 2026-02-02 05:26:50.757728 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:26:50.757734 | orchestrator | } 2026-02-02 05:26:50.757741 | orchestrator | 2026-02-02 05:26:50.757748 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-02 05:26:50.757755 | orchestrator | Monday 02 February 2026 05:26:50 +0000 (0:00:00.919) 0:00:20.466 ******* 2026-02-02 05:26:50.757766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-02 05:26:50.757774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-02 05:26:50.757781 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:26:50.757793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-02 05:26:50.757805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': 
{'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-02 05:27:15.373524 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:27:15.373620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-02 05:27:15.373635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-02 05:27:15.373643 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:27:15.373663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-02 05:27:15.373670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-02 05:27:15.373693 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-02 05:27:15.373700 | orchestrator | plugin (): 
'NoneType' object is not subscriptable 2026-02-02 05:27:15.373714 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:27:15.373720 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-02 05:27:15.373740 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-02 05:27:15.373748 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:27:15.373754 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-02 05:27:15.373764 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-02 05:27:15.373776 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:27:15.373783 | orchestrator | 2026-02-02 05:27:15.373790 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-02 05:27:15.373797 | orchestrator | Monday 02 February 2026 05:26:52 +0000 (0:00:01.890) 0:00:22.356 ******* 2026-02-02 05:27:15.373803 | orchestrator | 2026-02-02 05:27:15.373809 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-02 05:27:15.373816 | orchestrator | Monday 02 February 2026 05:26:52 +0000 (0:00:00.156) 0:00:22.513 ******* 2026-02-02 05:27:15.373822 | orchestrator | 2026-02-02 05:27:15.373828 | orchestrator | TASK [openvswitch : Flush Handlers] 
******************************************** 2026-02-02 05:27:15.373834 | orchestrator | Monday 02 February 2026 05:26:52 +0000 (0:00:00.148) 0:00:22.661 ******* 2026-02-02 05:27:15.373840 | orchestrator | 2026-02-02 05:27:15.373846 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-02 05:27:15.373852 | orchestrator | Monday 02 February 2026 05:26:52 +0000 (0:00:00.157) 0:00:22.819 ******* 2026-02-02 05:27:15.373858 | orchestrator | 2026-02-02 05:27:15.373865 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-02 05:27:15.373871 | orchestrator | Monday 02 February 2026 05:26:53 +0000 (0:00:00.404) 0:00:23.224 ******* 2026-02-02 05:27:15.373877 | orchestrator | 2026-02-02 05:27:15.373883 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-02 05:27:15.373889 | orchestrator | Monday 02 February 2026 05:26:53 +0000 (0:00:00.162) 0:00:23.386 ******* 2026-02-02 05:27:15.373895 | orchestrator | 2026-02-02 05:27:15.373902 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-02-02 05:27:15.373908 | orchestrator | Monday 02 February 2026 05:26:53 +0000 (0:00:00.151) 0:00:23.538 ******* 2026-02-02 05:27:15.373914 | orchestrator | changed: [testbed-node-3] 2026-02-02 05:27:15.373920 | orchestrator | changed: [testbed-node-4] 2026-02-02 05:27:15.373926 | orchestrator | changed: [testbed-node-5] 2026-02-02 05:27:15.373932 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:27:15.373938 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:27:15.373944 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:27:15.373951 | orchestrator | 2026-02-02 05:27:15.373957 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-02-02 05:27:15.373963 | orchestrator | Monday 02 February 2026 05:27:04 +0000 (0:00:10.727) 
0:00:34.265 ******* 2026-02-02 05:27:15.374010 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:27:15.374060 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:27:15.374066 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:27:15.374072 | orchestrator | ok: [testbed-node-3] 2026-02-02 05:27:15.374078 | orchestrator | ok: [testbed-node-4] 2026-02-02 05:27:15.374084 | orchestrator | ok: [testbed-node-5] 2026-02-02 05:27:15.374090 | orchestrator | 2026-02-02 05:27:15.374096 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-02 05:27:15.374103 | orchestrator | Monday 02 February 2026 05:27:05 +0000 (0:00:01.087) 0:00:35.352 ******* 2026-02-02 05:27:15.374112 | orchestrator | changed: [testbed-node-3] 2026-02-02 05:27:15.374128 | orchestrator | changed: [testbed-node-4] 2026-02-02 05:27:28.392450 | orchestrator | changed: [testbed-node-5] 2026-02-02 05:27:28.392570 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:27:28.392586 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:27:28.392598 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:27:28.392611 | orchestrator | 2026-02-02 05:27:28.392625 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-02-02 05:27:28.392638 | orchestrator | Monday 02 February 2026 05:27:15 +0000 (0:00:10.076) 0:00:45.429 ******* 2026-02-02 05:27:28.392650 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-02-02 05:27:28.392663 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-02-02 05:27:28.392674 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-02-02 05:27:28.392707 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 
2026-02-02 05:27:28.392719 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-02-02 05:27:28.392730 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-02-02 05:27:28.392740 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-02-02 05:27:28.392751 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-02-02 05:27:28.392762 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-02-02 05:27:28.392773 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-02-02 05:27:28.392784 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-02-02 05:27:28.392810 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-02-02 05:27:28.392821 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-02 05:27:28.392832 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-02 05:27:28.392843 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-02 05:27:28.392854 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-02 05:27:28.392864 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-02 05:27:28.392875 | orchestrator | ok: 
[testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-02 05:27:28.392886 | orchestrator | 2026-02-02 05:27:28.392897 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-02-02 05:27:28.392908 | orchestrator | Monday 02 February 2026 05:27:21 +0000 (0:00:06.439) 0:00:51.868 ******* 2026-02-02 05:27:28.392919 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-02-02 05:27:28.392931 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:27:28.392942 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-02-02 05:27:28.392952 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:27:28.392963 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-02-02 05:27:28.392974 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:27:28.393030 | orchestrator | ok: [testbed-node-0] => (item=br-ex) 2026-02-02 05:27:28.393044 | orchestrator | ok: [testbed-node-1] => (item=br-ex) 2026-02-02 05:27:28.393056 | orchestrator | ok: [testbed-node-2] => (item=br-ex) 2026-02-02 05:27:28.393069 | orchestrator | 2026-02-02 05:27:28.393082 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-02-02 05:27:28.393095 | orchestrator | Monday 02 February 2026 05:27:24 +0000 (0:00:02.218) 0:00:54.086 ******* 2026-02-02 05:27:28.393108 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-02-02 05:27:28.393121 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:27:28.393132 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-02-02 05:27:28.393143 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:27:28.393154 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-02-02 05:27:28.393165 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:27:28.393176 | orchestrator | ok: [testbed-node-0] => 
(item=['br-ex', 'vxlan0']) 2026-02-02 05:27:28.393186 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-02-02 05:27:28.393207 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-02-02 05:27:28.393218 | orchestrator | 2026-02-02 05:27:28.393229 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 05:27:28.393241 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-02 05:27:28.393254 | orchestrator | testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-02 05:27:28.393284 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-02 05:27:28.393296 | orchestrator | testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-02 05:27:28.393307 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-02 05:27:28.393318 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-02 05:27:28.393328 | orchestrator | 2026-02-02 05:27:28.393339 | orchestrator | 2026-02-02 05:27:28.393350 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 05:27:28.393361 | orchestrator | Monday 02 February 2026 05:27:27 +0000 (0:00:03.876) 0:00:57.963 ******* 2026-02-02 05:27:28.393372 | orchestrator | =============================================================================== 2026-02-02 05:27:28.393383 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.73s 2026-02-02 05:27:28.393393 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 10.08s 2026-02-02 05:27:28.393404 | orchestrator | openvswitch : Set system-id, hostname and hw-offload 
-------------------- 6.44s 2026-02-02 05:27:28.393415 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.88s 2026-02-02 05:27:28.393426 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.34s 2026-02-02 05:27:28.393436 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.22s 2026-02-02 05:27:28.393447 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 2.19s 2026-02-02 05:27:28.393458 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.89s 2026-02-02 05:27:28.393468 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.87s 2026-02-02 05:27:28.393485 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.87s 2026-02-02 05:27:28.393496 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.79s 2026-02-02 05:27:28.393506 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.55s 2026-02-02 05:27:28.393517 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.51s 2026-02-02 05:27:28.393528 | orchestrator | module-load : Load modules ---------------------------------------------- 1.49s 2026-02-02 05:27:28.393538 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.47s 2026-02-02 05:27:28.393548 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.18s 2026-02-02 05:27:28.393559 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.12s 2026-02-02 05:27:28.393570 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.09s 2026-02-02 05:27:28.393580 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready 
------------ 1.09s 2026-02-02 05:27:28.393591 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 0.92s 2026-02-02 05:27:28.749070 | orchestrator | + osism apply -a upgrade ovn 2026-02-02 05:27:30.917827 | orchestrator | 2026-02-02 05:27:30 | INFO  | Task c7599abc-d25f-48bb-ae6f-73220a72c3a7 (ovn) was prepared for execution. 2026-02-02 05:27:30.917936 | orchestrator | 2026-02-02 05:27:30 | INFO  | It takes a moment until task c7599abc-d25f-48bb-ae6f-73220a72c3a7 (ovn) has been started and output is visible here. 2026-02-02 05:27:55.370421 | orchestrator | 2026-02-02 05:27:55.370518 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-02 05:27:55.370530 | orchestrator | 2026-02-02 05:27:55.370539 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-02 05:27:55.370548 | orchestrator | Monday 02 February 2026 05:27:37 +0000 (0:00:02.269) 0:00:02.269 ******* 2026-02-02 05:27:55.370556 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:27:55.370566 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:27:55.370574 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:27:55.370581 | orchestrator | ok: [testbed-node-3] 2026-02-02 05:27:55.370589 | orchestrator | ok: [testbed-node-4] 2026-02-02 05:27:55.370597 | orchestrator | ok: [testbed-node-5] 2026-02-02 05:27:55.370616 | orchestrator | 2026-02-02 05:27:55.370624 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-02 05:27:55.370632 | orchestrator | Monday 02 February 2026 05:27:40 +0000 (0:00:02.991) 0:00:05.260 ******* 2026-02-02 05:27:55.370640 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-02-02 05:27:55.370648 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-02-02 05:27:55.370656 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-02-02 05:27:55.370664 | 
orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-02-02 05:27:55.370671 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-02-02 05:27:55.370679 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-02-02 05:27:55.370687 | orchestrator | 2026-02-02 05:27:55.370694 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-02-02 05:27:55.370702 | orchestrator | 2026-02-02 05:27:55.370710 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-02-02 05:27:55.370718 | orchestrator | Monday 02 February 2026 05:27:43 +0000 (0:00:03.336) 0:00:08.597 ******* 2026-02-02 05:27:55.370726 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 05:27:55.370735 | orchestrator | 2026-02-02 05:27:55.370743 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-02-02 05:27:55.370751 | orchestrator | Monday 02 February 2026 05:27:47 +0000 (0:00:03.998) 0:00:12.596 ******* 2026-02-02 05:27:55.370761 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:27:55.370771 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:27:55.370779 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:27:55.370819 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:27:55.370828 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:27:55.370851 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:27:55.370860 | orchestrator | 2026-02-02 05:27:55.370868 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-02-02 05:27:55.370876 | orchestrator | Monday 02 February 2026 05:27:50 +0000 (0:00:02.524) 0:00:15.120 ******* 2026-02-02 05:27:55.370884 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:27:55.370892 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:27:55.370900 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:27:55.370908 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:27:55.370916 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:27:55.370928 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:27:55.370942 | orchestrator | 2026-02-02 05:27:55.370950 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-02 05:27:55.370958 | orchestrator | Monday 02 February 2026 05:27:53 +0000 (0:00:02.578) 0:00:17.699 ******* 2026-02-02 05:27:55.370966 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:27:55.370974 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:27:55.370989 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:28:03.175874 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:28:03.175963 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-02-02 05:28:03.175975 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:28:03.175984 | orchestrator | 2026-02-02 05:28:03.175993 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-02-02 05:28:03.176046 | orchestrator | Monday 02 February 2026 05:27:55 +0000 (0:00:02.352) 0:00:20.051 ******* 2026-02-02 05:28:03.176054 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:28:03.176080 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:28:03.176099 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:28:03.176106 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:28:03.176113 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:28:03.176134 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:28:03.176142 | orchestrator | 2026-02-02 05:28:03.176149 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-02-02 05:28:03.176155 | orchestrator | Monday 02 February 2026 05:27:58 +0000 (0:00:03.011) 0:00:23.063 
******* 2026-02-02 05:28:03.176164 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:28:03.176174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:28:03.176181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:28:03.176194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:28:03.176201 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:28:03.176212 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:28:03.176219 | orchestrator | 2026-02-02 05:28:03.176226 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-02-02 05:28:03.176234 | orchestrator | Monday 02 February 2026 05:28:01 +0000 (0:00:02.695) 0:00:25.759 ******* 2026-02-02 05:28:03.176241 | orchestrator | changed: [testbed-node-0] => { 2026-02-02 05:28:03.176249 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:28:03.176278 | orchestrator | } 2026-02-02 05:28:03.176286 | orchestrator | changed: [testbed-node-1] => { 2026-02-02 05:28:03.176292 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:28:03.176299 | orchestrator | } 2026-02-02 05:28:03.176306 | orchestrator | changed: [testbed-node-2] => { 2026-02-02 05:28:03.176312 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:28:03.176319 | orchestrator | } 2026-02-02 05:28:03.176326 | orchestrator | changed: [testbed-node-3] => { 2026-02-02 05:28:03.176332 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:28:03.176339 | orchestrator | } 2026-02-02 
05:28:03.176346 | orchestrator | changed: [testbed-node-4] => { 2026-02-02 05:28:03.176352 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:28:03.176359 | orchestrator | } 2026-02-02 05:28:03.176366 | orchestrator | changed: [testbed-node-5] => { 2026-02-02 05:28:03.176373 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:28:03.176379 | orchestrator | } 2026-02-02 05:28:03.176386 | orchestrator | 2026-02-02 05:28:03.176393 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-02 05:28:03.176400 | orchestrator | Monday 02 February 2026 05:28:03 +0000 (0:00:01.989) 0:00:27.748 ******* 2026-02-02 05:28:03.176413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:28:33.098590 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:28:33.098718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:28:33.098774 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:28:33.098790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:28:33.098804 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:28:33.098817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:28:33.098830 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:28:33.098844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:28:33.098857 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:28:33.098887 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:28:33.098901 | orchestrator | 
skipping: [testbed-node-5] 2026-02-02 05:28:33.098913 | orchestrator | 2026-02-02 05:28:33.098927 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-02 05:28:33.098941 | orchestrator | Monday 02 February 2026 05:28:05 +0000 (0:00:02.573) 0:00:30.321 ******* 2026-02-02 05:28:33.098954 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:28:33.098968 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:28:33.098983 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:28:33.098995 | orchestrator | ok: [testbed-node-4] 2026-02-02 05:28:33.099007 | orchestrator | ok: [testbed-node-3] 2026-02-02 05:28:33.099047 | orchestrator | ok: [testbed-node-5] 2026-02-02 05:28:33.099061 | orchestrator | 2026-02-02 05:28:33.099073 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-02 05:28:33.099086 | orchestrator | Monday 02 February 2026 05:28:09 +0000 (0:00:03.590) 0:00:33.912 ******* 2026-02-02 05:28:33.099099 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-02 05:28:33.099114 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-02-02 05:28:33.099128 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-02 05:28:33.099141 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-02 05:28:33.099154 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-02 05:28:33.099168 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-02 05:28:33.099182 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-02 05:28:33.099210 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-02 
05:28:33.099224 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-02 05:28:33.099236 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-02 05:28:33.099248 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-02 05:28:33.099285 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-02 05:28:33.099299 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-02 05:28:33.099315 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-02 05:28:33.099330 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-02 05:28:33.099345 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-02 05:28:33.099360 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-02 05:28:33.099373 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-02 05:28:33.099387 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-02 05:28:33.099401 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-02 05:28:33.099415 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-02 
05:28:33.099429 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-02 05:28:33.099443 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-02 05:28:33.099456 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-02 05:28:33.099469 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-02 05:28:33.099483 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-02 05:28:33.099495 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-02 05:28:33.099509 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-02 05:28:33.099523 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-02 05:28:33.099537 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-02 05:28:33.099549 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-02 05:28:33.099562 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-02 05:28:33.099585 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-02 05:28:33.099599 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-02 05:28:33.099612 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-02 05:28:33.099626 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-02 05:28:33.099638 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 
'physnet1:br-ex', 'state': 'absent'}) 2026-02-02 05:28:33.099651 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-02 05:28:33.099678 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-02 05:28:33.099692 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-02 05:28:33.099705 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-02 05:28:33.099719 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-02 05:28:33.099733 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-02-02 05:28:33.099755 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-02 05:28:33.099768 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-02-02 05:28:33.099782 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-02 05:28:33.099795 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-02 05:28:33.099820 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-02 05:31:21.769485 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-02 05:31:21.769605 | 
orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-02 05:31:21.769627 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-02 05:31:21.769641 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-02 05:31:21.769651 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-02 05:31:21.769659 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-02 05:31:21.769666 | orchestrator | 2026-02-02 05:31:21.769675 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-02 05:31:21.769683 | orchestrator | Monday 02 February 2026 05:28:29 +0000 (0:00:20.665) 0:00:54.578 ******* 2026-02-02 05:31:21.769690 | orchestrator | 2026-02-02 05:31:21.769698 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-02 05:31:21.769705 | orchestrator | Monday 02 February 2026 05:28:30 +0000 (0:00:00.468) 0:00:55.046 ******* 2026-02-02 05:31:21.769712 | orchestrator | 2026-02-02 05:31:21.769720 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-02 05:31:21.769727 | orchestrator | Monday 02 February 2026 05:28:30 +0000 (0:00:00.436) 0:00:55.483 ******* 2026-02-02 05:31:21.769734 | orchestrator | 2026-02-02 05:31:21.769741 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-02 05:31:21.769749 | orchestrator | Monday 02 February 2026 05:28:31 +0000 (0:00:00.443) 0:00:55.927 ******* 2026-02-02 05:31:21.769756 | orchestrator | 2026-02-02 05:31:21.769763 | 
orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-02 05:31:21.769770 | orchestrator | Monday 02 February 2026 05:28:31 +0000 (0:00:00.447) 0:00:56.374 ******* 2026-02-02 05:31:21.769778 | orchestrator | 2026-02-02 05:31:21.769785 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-02 05:31:21.769792 | orchestrator | Monday 02 February 2026 05:28:32 +0000 (0:00:00.473) 0:00:56.848 ******* 2026-02-02 05:31:21.769818 | orchestrator | 2026-02-02 05:31:21.769825 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-02 05:31:21.769833 | orchestrator | Monday 02 February 2026 05:28:33 +0000 (0:00:00.874) 0:00:57.722 ******* 2026-02-02 05:31:21.769840 | orchestrator | 2026-02-02 05:31:21.769847 | orchestrator | STILL ALIVE [task 'ovn-controller : Restart ovn-controller container' is running] *** 2026-02-02 05:31:21.769855 | orchestrator | changed: [testbed-node-3] 2026-02-02 05:31:21.769864 | orchestrator | changed: [testbed-node-5] 2026-02-02 05:31:21.769873 | orchestrator | changed: [testbed-node-4] 2026-02-02 05:31:21.769885 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:31:21.769897 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:31:21.769909 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:31:21.769921 | orchestrator | 2026-02-02 05:31:21.769948 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-02 05:31:21.769960 | orchestrator | 2026-02-02 05:31:21.769972 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-02 05:31:21.769984 | orchestrator | Monday 02 February 2026 05:30:44 +0000 (0:02:11.652) 0:03:09.375 ******* 2026-02-02 05:31:21.769997 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 
05:31:21.770010 | orchestrator | 2026-02-02 05:31:21.770081 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-02 05:31:21.770090 | orchestrator | Monday 02 February 2026 05:30:46 +0000 (0:00:01.990) 0:03:11.366 ******* 2026-02-02 05:31:21.770098 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-02 05:31:21.770107 | orchestrator | 2026-02-02 05:31:21.770115 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-02-02 05:31:21.770147 | orchestrator | Monday 02 February 2026 05:30:48 +0000 (0:00:01.946) 0:03:13.312 ******* 2026-02-02 05:31:21.770156 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:31:21.770165 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:31:21.770173 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:31:21.770182 | orchestrator | 2026-02-02 05:31:21.770190 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-02 05:31:21.770198 | orchestrator | Monday 02 February 2026 05:30:50 +0000 (0:00:01.885) 0:03:15.197 ******* 2026-02-02 05:31:21.770206 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:31:21.770214 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:31:21.770222 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:31:21.770230 | orchestrator | 2026-02-02 05:31:21.770240 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-02-02 05:31:21.770255 | orchestrator | Monday 02 February 2026 05:30:51 +0000 (0:00:01.378) 0:03:16.576 ******* 2026-02-02 05:31:21.770268 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:31:21.770282 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:31:21.770295 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:31:21.770309 | orchestrator | 2026-02-02 05:31:21.770323 | orchestrator | TASK [ovn-db : Establish whether the OVN 
NB cluster has already existed] ******* 2026-02-02 05:31:21.770338 | orchestrator | Monday 02 February 2026 05:30:53 +0000 (0:00:01.346) 0:03:17.923 ******* 2026-02-02 05:31:21.770352 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:31:21.770363 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:31:21.770372 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:31:21.770380 | orchestrator | 2026-02-02 05:31:21.770388 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-02-02 05:31:21.770395 | orchestrator | Monday 02 February 2026 05:30:54 +0000 (0:00:01.715) 0:03:19.638 ******* 2026-02-02 05:31:21.770402 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:31:21.770425 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:31:21.770433 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:31:21.770440 | orchestrator | 2026-02-02 05:31:21.770447 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-02-02 05:31:21.770454 | orchestrator | Monday 02 February 2026 05:30:56 +0000 (0:00:01.484) 0:03:21.123 ******* 2026-02-02 05:31:21.770471 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:31:21.770479 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:31:21.770486 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:31:21.770493 | orchestrator | 2026-02-02 05:31:21.770500 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-02-02 05:31:21.770507 | orchestrator | Monday 02 February 2026 05:30:57 +0000 (0:00:01.393) 0:03:22.517 ******* 2026-02-02 05:31:21.770514 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:31:21.770521 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:31:21.770528 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:31:21.770535 | orchestrator | 2026-02-02 05:31:21.770542 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-02-02 
05:31:21.770549 | orchestrator | Monday 02 February 2026 05:30:59 +0000 (0:00:01.832) 0:03:24.349 ******* 2026-02-02 05:31:21.770557 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:31:21.770564 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:31:21.770571 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:31:21.770578 | orchestrator | 2026-02-02 05:31:21.770585 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-02-02 05:31:21.770592 | orchestrator | Monday 02 February 2026 05:31:01 +0000 (0:00:01.677) 0:03:26.028 ******* 2026-02-02 05:31:21.770599 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:31:21.770606 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:31:21.770613 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:31:21.770620 | orchestrator | 2026-02-02 05:31:21.770633 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-02-02 05:31:21.770645 | orchestrator | Monday 02 February 2026 05:31:03 +0000 (0:00:01.834) 0:03:27.862 ******* 2026-02-02 05:31:21.770658 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:31:21.770669 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:31:21.770681 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:31:21.770693 | orchestrator | 2026-02-02 05:31:21.770704 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-02-02 05:31:21.770716 | orchestrator | Monday 02 February 2026 05:31:04 +0000 (0:00:01.467) 0:03:29.330 ******* 2026-02-02 05:31:21.770729 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:31:21.770741 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:31:21.770753 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:31:21.770766 | orchestrator | 2026-02-02 05:31:21.770778 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-02-02 05:31:21.770791 | orchestrator | Monday 02 February 2026 
05:31:06 +0000 (0:00:01.426) 0:03:30.756 ******* 2026-02-02 05:31:21.770799 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:31:21.770806 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:31:21.770813 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:31:21.770821 | orchestrator | 2026-02-02 05:31:21.770828 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-02-02 05:31:21.770835 | orchestrator | Monday 02 February 2026 05:31:07 +0000 (0:00:01.356) 0:03:32.113 ******* 2026-02-02 05:31:21.770842 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:31:21.770849 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:31:21.770856 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:31:21.770863 | orchestrator | 2026-02-02 05:31:21.770870 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-02-02 05:31:21.770884 | orchestrator | Monday 02 February 2026 05:31:09 +0000 (0:00:01.792) 0:03:33.905 ******* 2026-02-02 05:31:21.770891 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:31:21.770898 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:31:21.770905 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:31:21.770912 | orchestrator | 2026-02-02 05:31:21.770919 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-02-02 05:31:21.770926 | orchestrator | Monday 02 February 2026 05:31:10 +0000 (0:00:01.394) 0:03:35.300 ******* 2026-02-02 05:31:21.770934 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:31:21.770941 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:31:21.770954 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:31:21.770961 | orchestrator | 2026-02-02 05:31:21.770968 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-02-02 05:31:21.770975 | orchestrator | Monday 02 February 2026 05:31:12 +0000 (0:00:02.090) 0:03:37.391 ******* 
2026-02-02 05:31:21.770982 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:31:21.770990 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:31:21.770996 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:31:21.771003 | orchestrator | 2026-02-02 05:31:21.771011 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-02-02 05:31:21.771018 | orchestrator | Monday 02 February 2026 05:31:14 +0000 (0:00:01.451) 0:03:38.843 ******* 2026-02-02 05:31:21.771025 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:31:21.771032 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:31:21.771039 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:31:21.771046 | orchestrator | 2026-02-02 05:31:21.771053 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-02 05:31:21.771060 | orchestrator | Monday 02 February 2026 05:31:15 +0000 (0:00:01.575) 0:03:40.419 ******* 2026-02-02 05:31:21.771067 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:31:21.771077 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:31:21.771092 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:31:21.771110 | orchestrator | 2026-02-02 05:31:21.771143 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-02 05:31:21.771155 | orchestrator | Monday 02 February 2026 05:31:17 +0000 (0:00:01.774) 0:03:42.193 ******* 2026-02-02 05:31:21.771179 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:31:28.068613 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:31:28.068719 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:31:28.068738 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:31:28.068768 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 
'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:31:28.068803 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:31:28.068816 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:31:28.068828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:31:28.068859 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:31:28.068872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:31:28.068884 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:31:28.068895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:31:28.068916 | orchestrator | 2026-02-02 05:31:28.068929 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-02 05:31:28.068942 | orchestrator | Monday 02 February 2026 05:31:21 +0000 (0:00:04.250) 0:03:46.444 ******* 2026-02-02 05:31:28.068960 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:31:28.068972 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:31:28.068984 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:31:28.068996 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:31:28.069015 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:31:43.321573 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:31:43.321678 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:31:43.321717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:31:43.321743 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:31:43.321756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:31:43.321767 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:31:43.321778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:31:43.321790 | orchestrator | 2026-02-02 05:31:43.321803 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-02-02 05:31:43.321817 | orchestrator | Monday 02 February 2026 05:31:28 +0000 (0:00:06.307) 0:03:52.752 ******* 2026-02-02 05:31:43.321840 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-02-02 05:31:43.321852 | orchestrator | 2026-02-02 05:31:43.321864 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-02-02 05:31:43.321875 
| orchestrator | Monday 02 February 2026 05:31:29 +0000 (0:00:01.778) 0:03:54.530 ******* 2026-02-02 05:31:43.321886 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:31:43.321897 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:31:43.321923 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:31:43.321934 | orchestrator | 2026-02-02 05:31:43.321945 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-02-02 05:31:43.321956 | orchestrator | Monday 02 February 2026 05:31:31 +0000 (0:00:02.017) 0:03:56.548 ******* 2026-02-02 05:31:43.321975 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:31:43.321986 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:31:43.321997 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:31:43.322007 | orchestrator | 2026-02-02 05:31:43.322072 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-02-02 05:31:43.322084 | orchestrator | Monday 02 February 2026 05:31:34 +0000 (0:00:02.713) 0:03:59.261 ******* 2026-02-02 05:31:43.322095 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:31:43.322106 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:31:43.322117 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:31:43.322160 | orchestrator | 2026-02-02 05:31:43.322182 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-02-02 05:31:43.322203 | orchestrator | Monday 02 February 2026 05:31:37 +0000 (0:00:02.852) 0:04:02.114 ******* 2026-02-02 05:31:43.322225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:31:43.322251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:31:43.322265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:31:43.322282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-02-02 05:31:43.322301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:31:43.322315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:31:43.322347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:31:47.903999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:31:47.904104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:31:47.904166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:31:47.904187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:31:47.904198 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:31:47.904207 | orchestrator | 2026-02-02 05:31:47.904218 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-02-02 05:31:47.904228 | orchestrator | Monday 02 February 2026 05:31:43 +0000 (0:00:05.880) 0:04:07.995 ******* 2026-02-02 05:31:47.904237 | orchestrator | changed: [testbed-node-0] => { 2026-02-02 05:31:47.904247 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:31:47.904256 | orchestrator | } 2026-02-02 05:31:47.904265 | orchestrator | changed: [testbed-node-1] => { 2026-02-02 05:31:47.904273 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:31:47.904303 | orchestrator | } 2026-02-02 05:31:47.904313 | orchestrator | changed: [testbed-node-2] => { 2026-02-02 05:31:47.904321 | orchestrator |  "msg": "Notifying handlers" 2026-02-02 05:31:47.904330 | orchestrator | } 2026-02-02 05:31:47.904339 | orchestrator | 2026-02-02 05:31:47.904354 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-02 05:31:47.904370 | orchestrator | Monday 02 February 2026 05:31:44 +0000 (0:00:01.425) 0:04:09.420 ******* 2026-02-02 05:31:47.904386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:31:47.904420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:31:47.904430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:31:47.904446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 
05:31:47.904463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:31:47.904478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:31:47.904494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:31:47.904512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:31:47.904523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-02 05:31:47.904541 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-02 05:33:22.951918 | orchestrator | 2026-02-02 05:33:22.952026 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-02-02 05:33:22.952039 | orchestrator | Monday 02 February 2026 05:31:47 +0000 (0:00:03.161) 0:04:12.581 ******* 2026-02-02 05:33:22.952048 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-02-02 05:33:22.952057 | orchestrator | changed: [testbed-node-1] => (item=[1]) 
2026-02-02 05:33:22.952065 | orchestrator | changed: [testbed-node-2] => (item=[1])
2026-02-02 05:33:22.952073 | orchestrator |
2026-02-02 05:33:22.952081 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-02-02 05:33:22.952090 | orchestrator | Monday 02 February 2026 05:31:50 +0000 (0:00:02.200) 0:04:14.782 *******
2026-02-02 05:33:22.952098 | orchestrator | changed: [testbed-node-0] => {
2026-02-02 05:33:22.952108 | orchestrator |     "msg": "Notifying handlers"
2026-02-02 05:33:22.952116 | orchestrator | }
2026-02-02 05:33:22.952124 | orchestrator | changed: [testbed-node-1] => {
2026-02-02 05:33:22.952146 | orchestrator |     "msg": "Notifying handlers"
2026-02-02 05:33:22.952154 | orchestrator | }
2026-02-02 05:33:22.952161 | orchestrator | changed: [testbed-node-2] => {
2026-02-02 05:33:22.952168 | orchestrator |     "msg": "Notifying handlers"
2026-02-02 05:33:22.952175 | orchestrator | }
2026-02-02 05:33:22.952258 | orchestrator |
2026-02-02 05:33:22.952266 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-02 05:33:22.952273 | orchestrator | Monday 02 February 2026 05:31:51 +0000 (0:00:01.355) 0:04:16.138 *******
2026-02-02 05:33:22.952280 | orchestrator |
2026-02-02 05:33:22.952288 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-02 05:33:22.952295 | orchestrator | Monday 02 February 2026 05:31:51 +0000 (0:00:00.443) 0:04:16.582 *******
2026-02-02 05:33:22.952302 | orchestrator |
2026-02-02 05:33:22.952310 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-02 05:33:22.952336 | orchestrator | Monday 02 February 2026 05:31:52 +0000 (0:00:00.442) 0:04:17.024 *******
2026-02-02 05:33:22.952343 | orchestrator |
2026-02-02 05:33:22.952351 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-02-02 05:33:22.952358 | orchestrator | Monday 02 February 2026 05:31:53 +0000 (0:00:00.806) 0:04:17.830 *******
2026-02-02 05:33:22.952365 | orchestrator | changed: [testbed-node-0]
2026-02-02 05:33:22.952373 | orchestrator | changed: [testbed-node-1]
2026-02-02 05:33:22.952380 | orchestrator | changed: [testbed-node-2]
2026-02-02 05:33:22.952387 | orchestrator |
2026-02-02 05:33:22.952394 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-02-02 05:33:22.952401 | orchestrator | Monday 02 February 2026 05:32:11 +0000 (0:00:17.929) 0:04:35.760 *******
2026-02-02 05:33:22.952408 | orchestrator | changed: [testbed-node-1]
2026-02-02 05:33:22.952415 | orchestrator | changed: [testbed-node-2]
2026-02-02 05:33:22.952422 | orchestrator | changed: [testbed-node-0]
2026-02-02 05:33:22.952429 | orchestrator |
2026-02-02 05:33:22.952437 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] *******************
2026-02-02 05:33:22.952444 | orchestrator | Monday 02 February 2026 05:32:27 +0000 (0:00:16.896) 0:04:52.656 *******
2026-02-02 05:33:22.952451 | orchestrator | changed: [testbed-node-2] => (item=1)
2026-02-02 05:33:22.952458 | orchestrator | changed: [testbed-node-1] => (item=1)
2026-02-02 05:33:22.952466 | orchestrator | changed: [testbed-node-0] => (item=1)
2026-02-02 05:33:22.952475 | orchestrator |
2026-02-02 05:33:22.952482 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-02-02 05:33:22.952490 | orchestrator | Monday 02 February 2026 05:32:44 +0000 (0:00:16.391) 0:05:09.048 *******
2026-02-02 05:33:22.952498 | orchestrator | changed: [testbed-node-1]
2026-02-02 05:33:22.952505 | orchestrator | changed: [testbed-node-2]
2026-02-02 05:33:22.952513 | orchestrator | changed: [testbed-node-0]
2026-02-02 05:33:22.952520 | orchestrator |
2026-02-02 05:33:22.952528 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-02-02 05:33:22.952536 | orchestrator | Monday 02 February 2026 05:33:02 +0000 (0:00:17.967) 0:05:27.015 *******
2026-02-02 05:33:22.952544 | orchestrator | Pausing for 5 seconds
2026-02-02 05:33:22.952553 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:33:22.952561 | orchestrator |
2026-02-02 05:33:22.952569 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-02-02 05:33:22.952576 | orchestrator | Monday 02 February 2026 05:33:08 +0000 (0:00:06.222) 0:05:33.239 *******
2026-02-02 05:33:22.952584 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:33:22.952592 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:33:22.952599 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:33:22.952607 | orchestrator |
2026-02-02 05:33:22.952615 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-02-02 05:33:22.952623 | orchestrator | Monday 02 February 2026 05:33:10 +0000 (0:00:01.854) 0:05:35.093 *******
2026-02-02 05:33:22.952631 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:33:22.952639 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:33:22.952647 | orchestrator | changed: [testbed-node-0]
2026-02-02 05:33:22.952654 | orchestrator |
2026-02-02 05:33:22.952662 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-02-02 05:33:22.952671 | orchestrator | Monday 02 February 2026 05:33:12 +0000 (0:00:01.631) 0:05:36.725 *******
2026-02-02 05:33:22.952679 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:33:22.952687 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:33:22.952695 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:33:22.952703 | orchestrator |
2026-02-02 05:33:22.952712 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-02-02 05:33:22.952719 | orchestrator | Monday 02 February 2026 05:33:13 +0000 (0:00:01.811) 0:05:38.537 *******
2026-02-02 05:33:22.952727 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:33:22.952735 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:33:22.952743 | orchestrator | changed: [testbed-node-2]
2026-02-02 05:33:22.952757 | orchestrator |
2026-02-02 05:33:22.952765 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-02-02 05:33:22.952773 | orchestrator | Monday 02 February 2026 05:33:15 +0000 (0:00:02.122) 0:05:40.659 *******
2026-02-02 05:33:22.952781 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:33:22.952789 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:33:22.952797 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:33:22.952805 | orchestrator |
2026-02-02 05:33:22.952813 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-02-02 05:33:22.952838 | orchestrator | Monday 02 February 2026 05:33:17 +0000 (0:00:01.751) 0:05:42.410 *******
2026-02-02 05:33:22.952847 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:33:22.952853 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:33:22.952860 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:33:22.952867 | orchestrator |
2026-02-02 05:33:22.952875 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] ***************************************
2026-02-02 05:33:22.952882 | orchestrator | Monday 02 February 2026 05:33:19 +0000 (0:00:01.753) 0:05:44.164 *******
2026-02-02 05:33:22.952890 | orchestrator | ok: [testbed-node-0] => (item=1)
2026-02-02 05:33:22.952898 | orchestrator | ok: [testbed-node-1] => (item=1)
2026-02-02 05:33:22.952906 | orchestrator | ok: [testbed-node-2] => (item=1)
2026-02-02 05:33:22.952913 | orchestrator |
2026-02-02 05:33:22.952921 | orchestrator | PLAY RECAP *********************************************************************
2026-02-02 05:33:22.952929 | orchestrator | testbed-node-0 : ok=49  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-02 05:33:22.952945 | orchestrator | testbed-node-1 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-02 05:33:22.952952 | orchestrator | testbed-node-2 : ok=48  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-02 05:33:22.952960 | orchestrator | testbed-node-3 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-02 05:33:22.952968 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-02 05:33:22.952975 | orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-02 05:33:22.952983 | orchestrator |
2026-02-02 05:33:22.952990 | orchestrator |
2026-02-02 05:33:22.952998 | orchestrator | TASKS RECAP ********************************************************************
2026-02-02 05:33:22.953005 | orchestrator | Monday 02 February 2026 05:33:22 +0000 (0:00:03.027) 0:05:47.192 *******
2026-02-02 05:33:22.953012 | orchestrator | ===============================================================================
2026-02-02 05:33:22.953019 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 131.65s
2026-02-02 05:33:22.953027 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.67s
2026-02-02 05:33:22.953034 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 17.97s
2026-02-02 05:33:22.953041 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 17.93s
2026-02-02 05:33:22.953048 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 16.90s
2026-02-02 05:33:22.953056 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 16.39s
2026-02-02 05:33:22.953063 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.31s
2026-02-02 05:33:22.953070 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 6.22s
2026-02-02 05:33:22.953078 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 5.88s
2026-02-02 05:33:22.953086 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 4.25s
2026-02-02 05:33:22.953099 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 4.00s
2026-02-02 05:33:22.953106 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.59s
2026-02-02 05:33:22.953114 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.34s
2026-02-02 05:33:22.953121 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.16s
2026-02-02 05:33:22.953129 | orchestrator | ovn-controller : Flush handlers ----------------------------------------- 3.14s
2026-02-02 05:33:22.953136 | orchestrator | ovn-db : Wait for ovn-sb-db-relay --------------------------------------- 3.03s
2026-02-02 05:33:22.953144 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 3.01s
2026-02-02 05:33:22.953151 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.99s
2026-02-02 05:33:22.953159 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 2.85s
2026-02-02 05:33:22.953167 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 2.71s
2026-02-02 05:33:23.273550 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-02-02 05:33:23.273668 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-02 05:33:23.273694 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh
2026-02-02 05:33:23.278504 | orchestrator | + set -e
2026-02-02 05:33:23.278827 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-02 05:33:23.278866 | orchestrator | ++ export INTERACTIVE=false
2026-02-02 05:33:23.278888 | orchestrator | ++ INTERACTIVE=false
2026-02-02 05:33:23.278908 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-02 05:33:23.278926 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-02 05:33:23.278945 | orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes
2026-02-02 05:33:25.524535 | orchestrator | 2026-02-02 05:33:25 | INFO  | Task a6e02cf5-4caf-46d7-b8cf-51584520bf5e (ceph-rolling_update) was prepared for execution.
2026-02-02 05:33:25.524631 | orchestrator | 2026-02-02 05:33:25 | INFO  | It takes a moment until task a6e02cf5-4caf-46d7-b8cf-51584520bf5e (ceph-rolling_update) has been started and output is visible here.
2026-02-02 05:34:53.522332 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-02 05:34:53.522446 | orchestrator | 2.16.14
2026-02-02 05:34:53.522463 | orchestrator |
2026-02-02 05:34:53.522477 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] ****************
2026-02-02 05:34:53.522490 | orchestrator |
2026-02-02 05:34:53.522502 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ******************
2026-02-02 05:34:53.522514 | orchestrator | Monday 02 February 2026 05:33:33 +0000 (0:00:01.555) 0:00:01.556 *******
2026-02-02 05:34:53.522526 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors
2026-02-02 05:34:53.522538 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss
2026-02-02 05:34:53.522550 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients
2026-02-02 05:34:53.522562 | orchestrator | skipping: [localhost]
2026-02-02 05:34:53.522574 | orchestrator |
2026-02-02 05:34:53.522585 | orchestrator | PLAY [Gather facts and check the init system] **********************************
2026-02-02 05:34:53.522597 | orchestrator |
2026-02-02 05:34:53.522624 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ******************
2026-02-02 05:34:53.522636 | orchestrator | Monday 02 February 2026 05:33:35 +0000 (0:00:01.844) 0:00:03.400 *******
2026-02-02 05:34:53.522648 | orchestrator | ok: [testbed-node-0] => {
2026-02-02 05:34:53.522659 | orchestrator |     "msg": "gather facts on all Ceph hosts for following reference"
2026-02-02 05:34:53.522671 | orchestrator | }
2026-02-02 05:34:53.522683 | orchestrator | ok: [testbed-node-1] => {
2026-02-02 05:34:53.522694 | orchestrator |     "msg": "gather facts on all Ceph hosts for following reference"
2026-02-02 05:34:53.522706 | orchestrator | }
2026-02-02 05:34:53.522717 | orchestrator | ok: [testbed-node-2] => {
2026-02-02 05:34:53.522729 | orchestrator |     "msg": "gather facts on all Ceph hosts for following reference"
2026-02-02 05:34:53.522771 | orchestrator | }
2026-02-02 05:34:53.522784 | orchestrator | ok: [testbed-node-3] => {
2026-02-02 05:34:53.522796 | orchestrator |     "msg": "gather facts on all Ceph hosts for following reference"
2026-02-02 05:34:53.522807 | orchestrator | }
2026-02-02 05:34:53.522819 | orchestrator | ok: [testbed-node-4] => {
2026-02-02 05:34:53.522830 | orchestrator |     "msg": "gather facts on all Ceph hosts for following reference"
2026-02-02 05:34:53.522844 | orchestrator | }
2026-02-02 05:34:53.522857 | orchestrator | ok: [testbed-node-5] => {
2026-02-02 05:34:53.522871 | orchestrator |     "msg": "gather facts on all Ceph hosts for following reference"
2026-02-02 05:34:53.522883 | orchestrator | }
2026-02-02 05:34:53.522896 | orchestrator | ok: [testbed-manager] => {
2026-02-02 05:34:53.522910 | orchestrator |     "msg": "gather facts on all Ceph hosts for following reference"
2026-02-02 05:34:53.522923 | orchestrator | }
2026-02-02 05:34:53.522936 | orchestrator |
2026-02-02 05:34:53.522950 | orchestrator | TASK [Gather facts] ************************************************************
2026-02-02 05:34:53.522963 | orchestrator | Monday 02 February 2026 05:33:41 +0000 (0:00:06.160) 0:00:09.561 *******
2026-02-02 05:34:53.522976 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:34:53.522988 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:34:53.523000 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:34:53.523013 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:34:53.523026 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:34:53.523038 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:34:53.523050 | orchestrator | ok: [testbed-manager]
2026-02-02 05:34:53.523062 | orchestrator |
2026-02-02 05:34:53.523076 | orchestrator | TASK [Gather and delegate facts] ***********************************************
2026-02-02 05:34:53.523089 | orchestrator | Monday 02 February 2026 05:33:49 +0000 (0:00:07.205) 0:00:16.766 *******
2026-02-02 05:34:53.523102 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-02 05:34:53.523114 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-02 05:34:53.523127 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 05:34:53.523140 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 05:34:53.523153 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-02 05:34:53.523165 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 05:34:53.523177 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-02 05:34:53.523190 | orchestrator |
2026-02-02 05:34:53.523202 | orchestrator | TASK [Set_fact rolling_update] *************************************************
2026-02-02 05:34:53.523212 | orchestrator | Monday 02 February 2026 05:34:22 +0000 (0:00:33.167) 0:00:49.934 *******
2026-02-02 05:34:53.523223 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:34:53.523314 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:34:53.523327 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:34:53.523337 | orchestrator | ok: [testbed-node-3]
2026-02-02 05:34:53.523348 | orchestrator | ok: [testbed-node-4]
2026-02-02 05:34:53.523359 | orchestrator | ok: [testbed-node-5]
2026-02-02 05:34:53.523369 | orchestrator | ok: [testbed-manager]
2026-02-02 05:34:53.523380 | orchestrator |
2026-02-02 05:34:53.523391 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-02 05:34:53.523401 | orchestrator | Monday 02 February 2026 05:34:24 +0000 (0:00:02.206) 0:00:52.140 *******
2026-02-02 05:34:53.523413 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-02-02 05:34:53.523426 | orchestrator |
2026-02-02 05:34:53.523437 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-02 05:34:53.523448 | orchestrator | Monday 02 February 2026 05:34:27 +0000 (0:00:03.087) 0:00:55.228 *******
2026-02-02 05:34:53.523458 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:34:53.523478 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:34:53.523489 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:34:53.523499 | orchestrator | ok: [testbed-node-3]
2026-02-02 05:34:53.523510 | orchestrator | ok: [testbed-node-4]
2026-02-02 05:34:53.523520 | orchestrator | ok: [testbed-node-5]
2026-02-02 05:34:53.523531 | orchestrator | ok: [testbed-manager]
2026-02-02 05:34:53.523542 | orchestrator |
2026-02-02 05:34:53.523571 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-02 05:34:53.523583 | orchestrator | Monday 02 February 2026 05:34:30 +0000 (0:00:02.489) 0:00:57.717 *******
2026-02-02 05:34:53.523593 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:34:53.523604 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:34:53.523614 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:34:53.523625 | orchestrator | ok: [testbed-node-3]
2026-02-02 05:34:53.523634 | orchestrator | ok: [testbed-node-4]
2026-02-02 05:34:53.523644 | orchestrator | ok: [testbed-node-5]
2026-02-02 05:34:53.523653 | orchestrator | ok: [testbed-manager]
2026-02-02 05:34:53.523663 | orchestrator |
2026-02-02 05:34:53.523672 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-02 05:34:53.523682 | orchestrator | Monday 02 February 2026 05:34:32 +0000 (0:00:01.963) 0:00:59.681 *******
2026-02-02 05:34:53.523691 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:34:53.523701 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:34:53.523710 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:34:53.523719 | orchestrator | ok: [testbed-node-3]
2026-02-02 05:34:53.523728 | orchestrator | ok: [testbed-node-4]
2026-02-02 05:34:53.523738 | orchestrator | ok: [testbed-node-5]
2026-02-02 05:34:53.523747 | orchestrator | ok: [testbed-manager]
2026-02-02 05:34:53.523757 | orchestrator |
2026-02-02 05:34:53.523772 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-02 05:34:53.523782 | orchestrator | Monday 02 February 2026 05:34:34 +0000 (0:00:02.446) 0:01:02.127 *******
2026-02-02 05:34:53.523791 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:34:53.523800 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:34:53.523810 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:34:53.523819 | orchestrator | ok: [testbed-node-3]
2026-02-02 05:34:53.523828 | orchestrator | ok: [testbed-node-4]
2026-02-02 05:34:53.523838 | orchestrator | ok: [testbed-node-5]
2026-02-02 05:34:53.523847 | orchestrator | ok: [testbed-manager]
2026-02-02 05:34:53.523856 | orchestrator |
2026-02-02 05:34:53.523866 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-02 05:34:53.523876 | orchestrator | Monday 02 February 2026 05:34:36 +0000 (0:00:01.903) 0:01:04.031 *******
2026-02-02 05:34:53.523885 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:34:53.523894 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:34:53.523904 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:34:53.523913 | orchestrator | ok: [testbed-node-3]
2026-02-02 05:34:53.523922 | orchestrator | ok: [testbed-node-4]
2026-02-02 05:34:53.523931 | orchestrator | ok: [testbed-node-5]
2026-02-02 05:34:53.523941 | orchestrator | ok: [testbed-manager]
2026-02-02 05:34:53.523950 | orchestrator |
2026-02-02 05:34:53.523960 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-02 05:34:53.523969 | orchestrator | Monday 02 February 2026 05:34:38 +0000 (0:00:02.022) 0:01:06.053 *******
2026-02-02 05:34:53.523979 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:34:53.523988 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:34:53.523998 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:34:53.524007 | orchestrator | ok: [testbed-node-3]
2026-02-02 05:34:53.524016 | orchestrator | ok: [testbed-node-4]
2026-02-02 05:34:53.524026 | orchestrator | ok: [testbed-node-5]
2026-02-02 05:34:53.524035 | orchestrator | ok: [testbed-manager]
2026-02-02 05:34:53.524044 | orchestrator |
2026-02-02 05:34:53.524054 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-02 05:34:53.524064 | orchestrator | Monday 02 February 2026 05:34:40 +0000 (0:00:02.025) 0:01:08.079 *******
2026-02-02 05:34:53.524073 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:34:53.524089 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:34:53.524098 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:34:53.524107 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:34:53.524117 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:34:53.524126 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:34:53.524136 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:34:53.524145 | orchestrator |
2026-02-02 05:34:53.524155 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-02 05:34:53.524164 | orchestrator | Monday 02 February 2026 05:34:42 +0000 (0:00:02.302) 0:01:10.382 *******
2026-02-02 05:34:53.524174 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:34:53.524183 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:34:53.524192 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:34:53.524202 | orchestrator | ok: [testbed-node-3]
2026-02-02 05:34:53.524211 | orchestrator | ok: [testbed-node-4]
2026-02-02 05:34:53.524221 | orchestrator | ok: [testbed-node-5]
2026-02-02 05:34:53.524249 | orchestrator | ok: [testbed-manager]
2026-02-02 05:34:53.524259 | orchestrator |
2026-02-02 05:34:53.524269 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-02 05:34:53.524279 | orchestrator | Monday 02 February 2026 05:34:44 +0000 (0:00:01.920) 0:01:12.302 *******
2026-02-02 05:34:53.524288 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 05:34:53.524298 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 05:34:53.524307 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 05:34:53.524317 | orchestrator |
2026-02-02 05:34:53.524327 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-02 05:34:53.524336 | orchestrator | Monday 02 February 2026 05:34:46 +0000 (0:00:01.707) 0:01:14.010 *******
2026-02-02 05:34:53.524346 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:34:53.524355 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:34:53.524364 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:34:53.524374 | orchestrator | ok: [testbed-node-3]
2026-02-02 05:34:53.524383 | orchestrator | ok: [testbed-node-4]
2026-02-02 05:34:53.524392 | orchestrator | ok: [testbed-node-5]
2026-02-02 05:34:53.524423 | orchestrator | ok: [testbed-manager]
2026-02-02 05:34:53.524433 | orchestrator |
2026-02-02 05:34:53.524442 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-02 05:34:53.524452 | orchestrator | Monday 02 February 2026 05:34:48 +0000 (0:00:02.505) 0:01:16.516 *******
2026-02-02 05:34:53.524461 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 05:34:53.524471 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 05:34:53.524480 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 05:34:53.524490 | orchestrator |
2026-02-02 05:34:53.524500 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-02 05:34:53.524509 | orchestrator | Monday 02 February 2026 05:34:52 +0000 (0:00:03.172) 0:01:19.688 *******
2026-02-02 05:34:53.524524 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 05:35:15.883319 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-02 05:35:15.883462 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-02 05:35:15.883489 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:35:15.883508 | orchestrator |
2026-02-02 05:35:15.883528 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-02 05:35:15.883548 | orchestrator | Monday 02 February 2026 05:34:53 +0000 (0:00:01.403) 0:01:21.092 *******
2026-02-02 05:35:15.883567 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-02 05:35:15.883612 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-02 05:35:15.883663 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-02 05:35:15.883682 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:35:15.883700 | orchestrator |
2026-02-02 05:35:15.883719 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-02 05:35:15.883737 | orchestrator | Monday 02 February 2026 05:34:55 +0000 (0:00:01.950) 0:01:23.042 *******
2026-02-02 05:35:15.883759 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-02 05:35:15.883780 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-02 05:35:15.883800 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-02 05:35:15.883821 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:35:15.883840 | orchestrator |
2026-02-02 05:35:15.883859 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-02 05:35:15.883879 | orchestrator | Monday 02 February 2026 05:34:56 +0000 (0:00:01.174) 0:01:24.217 *******
2026-02-02 05:35:15.883900 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'fef826d0639c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-02 05:34:49.548862', 'end': '2026-02-02 05:34:49.600712', 'delta': '0:00:00.051850', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fef826d0639c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-02 05:35:15.883955 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'a42e682d4965', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-02 05:34:50.364791', 'end': '2026-02-02 05:34:50.408344', 'delta': '0:00:00.043553', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a42e682d4965'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-02 05:35:15.883979 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '39d29fabc2d2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-02 05:34:50.876547', 'end': '2026-02-02 05:34:50.914676', 'delta': '0:00:00.038129', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['39d29fabc2d2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-02 05:35:15.884014 | orchestrator |
2026-02-02 05:35:15.884033 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-02 05:35:15.884051 | orchestrator | Monday 02 February 2026 05:34:57 +0000 (0:00:01.199) 0:01:25.416 *******
2026-02-02 05:35:15.884070 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:35:15.884091 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:35:15.884110 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:35:15.884129 | orchestrator | ok: [testbed-node-3]
2026-02-02 05:35:15.884147 | orchestrator | ok: [testbed-node-4]
2026-02-02 05:35:15.884164 | orchestrator | ok: [testbed-node-5]
2026-02-02 05:35:15.884183 | orchestrator | ok: [testbed-manager]
2026-02-02 05:35:15.884203 | orchestrator |
2026-02-02 05:35:15.884221 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-02 05:35:15.884267 | orchestrator | Monday 02 February 2026 05:34:59 +0000 (0:00:01.997) 0:01:27.414 *******
2026-02-02 05:35:15.884288 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:35:15.884306 | orchestrator |
2026-02-02 05:35:15.884325 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-02 05:35:15.884344 | orchestrator | Monday 02 February 2026 05:35:01 +0000 (0:00:01.300) 0:01:28.715 *******
2026-02-02 05:35:15.884363 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:35:15.884381 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:35:15.884398 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:35:15.884416 | orchestrator | ok: [testbed-node-3]
2026-02-02 05:35:15.884435 | orchestrator | ok: [testbed-node-4]
2026-02-02 05:35:15.884454 | orchestrator | ok: [testbed-node-5]
2026-02-02 05:35:15.884472 | orchestrator | ok: [testbed-manager]
2026-02-02 05:35:15.884490 | orchestrator |
2026-02-02 05:35:15.884507 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-02 05:35:15.884526 | orchestrator | Monday 02 February 2026 05:35:03 +0000 (0:00:02.109) 0:01:30.824 *******
2026-02-02 05:35:15.884544 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:35:15.884563 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-02-02 05:35:15.884582 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-02 05:35:15.884600 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-02 05:35:15.884618 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-02 05:35:15.884636 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-02 05:35:15.884653 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-02 05:35:15.884671 | orchestrator |
2026-02-02 05:35:15.884689 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-02 05:35:15.884707 | orchestrator | Monday 02 February 2026 05:35:06 +0000 (0:00:03.519) 0:01:34.344 *******
2026-02-02 05:35:15.884725 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:35:15.884745 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:35:15.884764 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:35:15.884782 | orchestrator | ok: [testbed-node-3]
2026-02-02 05:35:15.884799 | orchestrator | ok: [testbed-node-4]
2026-02-02 05:35:15.884817 | orchestrator | ok: [testbed-node-5]
2026-02-02 05:35:15.884835 | orchestrator | ok: [testbed-manager]
2026-02-02 05:35:15.884853 | orchestrator |
2026-02-02 05:35:15.884871 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-02 05:35:15.884900 | orchestrator | Monday 02 February 2026 05:35:08 +0000 (0:00:02.088) 0:01:36.433 *******
2026-02-02 05:35:15.884919 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:35:15.884937 | orchestrator |
2026-02-02 05:35:15.884956 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-02 05:35:15.884974 | orchestrator | Monday 02 February 2026 05:35:09 +0000 (0:00:01.145) 0:01:37.578 *******
2026-02-02 05:35:15.884992 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:35:15.885010 | orchestrator |
2026-02-02 05:35:15.885030 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-02 05:35:15.885049 | orchestrator | Monday 02 February 2026 05:35:11 +0000 (0:00:01.224) 0:01:38.803 *******
2026-02-02 05:35:15.885066 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:35:15.885085 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:35:15.885096 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:35:15.885107 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:35:15.885163 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:35:15.885175 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:35:15.885186 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:35:15.885197 | orchestrator |
2026-02-02 05:35:15.885208 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-02 05:35:15.885219 | orchestrator | Monday 02 February 2026 05:35:13 +0000 (0:00:02.590) 0:01:41.394 *******
2026-02-02 05:35:15.885229 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:35:15.885274 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:35:15.885286 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:35:15.885297 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:35:15.885307 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:35:15.885317 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:35:15.885336 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:35:26.897680 | orchestrator |
2026-02-02 05:35:26.897799 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-02 05:35:26.897817 | orchestrator | Monday 02 February 2026 05:35:15 +0000 (0:00:02.061) 0:01:43.455 *******
2026-02-02 05:35:26.897829 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:35:26.897841 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:35:26.897853 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:35:26.897864 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:35:26.897875 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:35:26.897885 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:35:26.897896 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:35:26.897907 | orchestrator |
2026-02-02 05:35:26.897919 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-02 05:35:26.897929 | orchestrator | Monday 02 February 2026 05:35:18 +0000 (0:00:02.301) 0:01:45.757 *******
2026-02-02 05:35:26.897940 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:35:26.897951 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:35:26.897978 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:35:26.897989 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:35:26.898000 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:35:26.898012 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:35:26.898084 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:35:26.898096 | orchestrator |
2026-02-02 05:35:26.898107 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-02 05:35:26.898118 | orchestrator | Monday 02 February 2026 05:35:20 +0000 (0:00:02.090) 0:01:47.848 *******
2026-02-02 05:35:26.898129 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:35:26.898140 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:35:26.898151 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:35:26.898162 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:35:26.898172 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:35:26.898183 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:35:26.898194 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:35:26.898226 | orchestrator |
2026-02-02 05:35:26.898240 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-02 05:35:26.898274 | orchestrator | Monday 02 February 2026 05:35:22 +0000 (0:00:02.177) 0:01:50.025 *******
2026-02-02 05:35:26.898287 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:35:26.898299 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:35:26.898311 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:35:26.898324 | orchestrator |
skipping: [testbed-node-3] 2026-02-02 05:35:26.898336 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:35:26.898349 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:35:26.898361 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:35:26.898372 | orchestrator | 2026-02-02 05:35:26.898383 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-02 05:35:26.898394 | orchestrator | Monday 02 February 2026 05:35:24 +0000 (0:00:01.913) 0:01:51.938 ******* 2026-02-02 05:35:26.898405 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:35:26.898416 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:35:26.898426 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:35:26.898437 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:35:26.898447 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:35:26.898458 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:35:26.898469 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:35:26.898479 | orchestrator | 2026-02-02 05:35:26.898490 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-02 05:35:26.898501 | orchestrator | Monday 02 February 2026 05:35:26 +0000 (0:00:02.285) 0:01:54.224 ******* 2026-02-02 05:35:26.898514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:26.898530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 
'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:26.898541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:26.898572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-02 05:35:26.898587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:26.898612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': 
None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:26.898623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:26.898638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91f9e36e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part16', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part14', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part15', 
'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part1', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-02 05:35:26.898652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:26.898671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.115048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 
'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.115191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.115209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.115224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-27-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-02 05:35:27.115240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': 
None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.115288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.115300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.115343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2343887', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part16', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part14', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part15', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part1', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-02 05:35:27.115366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.115378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.115390 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:35:27.115403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.115414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.115426 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.115437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-23-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-02 05:35:27.115463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.403475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.403575 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.403602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0dc97797', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part16', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part14', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part15', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part1', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-02 05:35:27.403627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.403667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.403679 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:35:27.403708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 
'host': '', 'holders': []}})  2026-02-02 05:35:27.403726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--af42a967--eb71--546a--abb0--a5185990ed2a-osd--block--af42a967--eb71--546a--abb0--a5185990ed2a', 'dm-uuid-LVM-nQNI9mGSypmWJN7Kribh0RNL5qLQKFSceYxT4mfzBYfoYiha3ZzoEdYR0rTnnIvK'], 'uuids': ['a78e3f4b-723a-42a3-abd4-4d699a55c416'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1f26c814', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['eYxT4m-fzBY-foYi-ha3Z-zoEd-YR0r-TnnIvK']}})  2026-02-02 05:35:27.403739 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:35:27.403750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c15f901f-7629-41e5-bfd5-e721d3f198c6', 'scsi-SQEMU_QEMU_HARDDISK_c15f901f-7629-41e5-bfd5-e721d3f198c6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c15f901f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-02 05:35:27.403762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-HOxmXw-N5cX-V1Nz-Lu3r-OQk9-N5gG-1syyTi', 'scsi-0QEMU_QEMU_HARDDISK_5578c4aa-4507-4a80-9665-78072b9f11f4', 'scsi-SQEMU_QEMU_HARDDISK_5578c4aa-4507-4a80-9665-78072b9f11f4'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5578c4aa', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--2b8f5a57--fc4d--5c4a--8869--764dca19b379-osd--block--2b8f5a57--fc4d--5c4a--8869--764dca19b379']}})  2026-02-02 05:35:27.403773 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.403783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.403800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-21-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-02 05:35:27.403819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.458532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-gyH8a8-4MHr-Jn7b-cz7p-hSu8-LEA3-bm3DqO', 'dm-uuid-CRYPT-LUKS2-8edeb25f170042ba8e6d0505727d2968-gyH8a8-4MHr-Jn7b-cz7p-hSu8-LEA3-bm3DqO'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-02 05:35:27.458633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.458652 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2b8f5a57--fc4d--5c4a--8869--764dca19b379-osd--block--2b8f5a57--fc4d--5c4a--8869--764dca19b379', 'dm-uuid-LVM-2Xx1rXy8ZvvzVeymXUM2Y23jmTeKUn30gyH8a84MHrJn7bcz7phSu8LEA3bm3DqO'], 'uuids': ['8edeb25f-1700-42ba-8e6d-0505727d2968'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5578c4aa', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['gyH8a8-4MHr-Jn7b-cz7p-hSu8-LEA3-bm3DqO']}})  2026-02-02 05:35:27.458666 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-yf6lEa-f3nO-iewk-DEDy-Fb6j-Kq2P-dbkgMf', 'scsi-0QEMU_QEMU_HARDDISK_1f26c814-af40-4046-ac8d-013998d956cc', 'scsi-SQEMU_QEMU_HARDDISK_1f26c814-af40-4046-ac8d-013998d956cc'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1f26c814', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--af42a967--eb71--546a--abb0--a5185990ed2a-osd--block--af42a967--eb71--546a--abb0--a5185990ed2a']}})  2026-02-02 05:35:27.458678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.458719 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.458765 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2944b273', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part16', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part14', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part15', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part1', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-02 05:35:27.458781 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.458793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--106e1245--4ea8--54a2--9b27--5c2b147fae19-osd--block--106e1245--4ea8--54a2--9b27--5c2b147fae19', 'dm-uuid-LVM-7fojGdQjjxzlZ1d67G3lfXV0uQvvNrpG74l8TP6AWG5LY1LTlUkEVjmQPc2hTMkL'], 'uuids': ['0037b285-4ac2-45c2-8d5f-985073fa4cde'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2d3e981f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['74l8TP-6AWG-5LY1-LTlU-kEVj-mQPc-2hTMkL']}})  2026-02-02 05:35:27.458805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.458824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_076229ff-17a9-47be-973d-14b64a36a012', 'scsi-SQEMU_QEMU_HARDDISK_076229ff-17a9-47be-973d-14b64a36a012'], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '076229ff', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-02 05:35:27.458850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-eYxT4m-fzBY-foYi-ha3Z-zoEd-YR0r-TnnIvK', 'dm-uuid-CRYPT-LUKS2-a78e3f4b723a42a3abd44d699a55c416-eYxT4m-fzBY-foYi-ha3Z-zoEd-YR0r-TnnIvK'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-02 05:35:27.655015 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-AITawh-CkpC-7L3c-Vqqe-GXUP-7eEh-WwcXRH', 'scsi-0QEMU_QEMU_HARDDISK_9dac4244-a4bc-44f9-ad81-53a595dd15e5', 'scsi-SQEMU_QEMU_HARDDISK_9dac4244-a4bc-44f9-ad81-53a595dd15e5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9dac4244', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89-osd--block--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89']}})  2026-02-02 05:35:27.655127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.655142 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.655150 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-26-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-02 05:35:27.655158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.655188 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-HvMSds-O6Yc-SLb4-aDqG-ATNE-cOud-g8iQom', 'dm-uuid-CRYPT-LUKS2-6399826b15f3492994c0bc4d1d3bf1c1-HvMSds-O6Yc-SLb4-aDqG-ATNE-cOud-g8iQom'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-02 05:35:27.655297 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.655341 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89-osd--block--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89', 'dm-uuid-LVM-bGXwDmNnGJLl15xDO66UDgeGoDbpg8C0HvMSdsO6YcSLb4aDqGATNEcOudg8iQom'], 'uuids': ['6399826b-15f3-4929-94c0-bc4d1d3bf1c1'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '9dac4244', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['HvMSds-O6Yc-SLb4-aDqG-ATNE-cOud-g8iQom']}})  2026-02-02 05:35:27.655352 | orchestrator | skipping: 
[testbed-node-3] 2026-02-02 05:35:27.655360 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-QbZaLy-yUYT-ccut-PcI7-2pGL-9PmJ-6NoPFr', 'scsi-0QEMU_QEMU_HARDDISK_2d3e981f-8554-4288-941a-275f46913f28', 'scsi-SQEMU_QEMU_HARDDISK_2d3e981f-8554-4288-941a-275f46913f28'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2d3e981f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--106e1245--4ea8--54a2--9b27--5c2b147fae19-osd--block--106e1245--4ea8--54a2--9b27--5c2b147fae19']}})  2026-02-02 05:35:27.655368 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.655378 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6d8209b1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part16', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part16'], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part14', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part15', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part1', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-02 05:35:27.655406 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.776929 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.777009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-74l8TP-6AWG-5LY1-LTlU-kEVj-mQPc-2hTMkL', 'dm-uuid-CRYPT-LUKS2-0037b2854ac245c28d5f985073fa4cde-74l8TP-6AWG-5LY1-LTlU-kEVj-mQPc-2hTMkL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-02 05:35:27.777021 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}})  2026-02-02 05:35:27.777029 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e4fc6918--1796--5a48--9994--5f31e91196e6-osd--block--e4fc6918--1796--5a48--9994--5f31e91196e6', 'dm-uuid-LVM-o4NjfQidgd0d8Dt2ERSF2CVjMcc1iNdF2FL70XUBfeOz8qjNKOcDK13w6fcJ9Hta'], 'uuids': ['7d002011-c2d2-4478-8516-4cfbbdeaec0b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '10248bd5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2FL70X-UBfe-Oz8q-jNKO-cDK1-3w6f-cJ9Hta']}})  2026-02-02 05:35:27.777056 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e969e129-18ea-460f-85bc-8dfb49c82359', 'scsi-SQEMU_QEMU_HARDDISK_e969e129-18ea-460f-85bc-8dfb49c82359'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e969e129', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-02 05:35:27.777065 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qjdzC2-uhmD-TpwQ-o3eu-AERk-xIpn-IuLEqz', 'scsi-0QEMU_QEMU_HARDDISK_bc39994b-92aa-40f2-807e-6457f6f8ea40', 'scsi-SQEMU_QEMU_HARDDISK_bc39994b-92aa-40f2-807e-6457f6f8ea40'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bc39994b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--d54a22ee--8606--5662--853b--b39e232caa8f-osd--block--d54a22ee--8606--5662--853b--b39e232caa8f']}})  2026-02-02 05:35:27.777072 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.777102 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.777110 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-24-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-02 05:35:27.777117 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:35:27.777125 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:27.777131 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-WJAtRW-tNBy-gZ9y-UEjc-aaQo-SOl1-TBvsQs', 'dm-uuid-CRYPT-LUKS2-756889fb99344894803ed86e669bebbd-WJAtRW-tNBy-gZ9y-UEjc-aaQo-SOl1-TBvsQs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-02 05:35:27.777143 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 
05:35:27.777149 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d54a22ee--8606--5662--853b--b39e232caa8f-osd--block--d54a22ee--8606--5662--853b--b39e232caa8f', 'dm-uuid-LVM-oyVS0lpzZeiZxxmfRvad67kbexmRBG5IWJAtRWtNBygZ9yUEjcaaQoSOl1TBvsQs'], 'uuids': ['756889fb-9934-4894-803e-d86e669bebbd'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'bc39994b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['WJAtRW-tNBy-gZ9y-UEjc-aaQo-SOl1-TBvsQs']}})  2026-02-02 05:35:27.777156 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-etyEN7-O4pu-QliJ-NKxv-0HLx-jIcx-JGZ0d7', 'scsi-0QEMU_QEMU_HARDDISK_10248bd5-0286-487e-81b0-791c797cb21b', 'scsi-SQEMU_QEMU_HARDDISK_10248bd5-0286-487e-81b0-791c797cb21b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '10248bd5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--e4fc6918--1796--5a48--9994--5f31e91196e6-osd--block--e4fc6918--1796--5a48--9994--5f31e91196e6']}})  2026-02-02 05:35:27.777172 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:29.038430 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2a7e3dd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part16', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part14', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part15', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part1', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-02 05:35:29.038521 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:29.038531 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:29.038537 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:29.038553 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2FL70X-UBfe-Oz8q-jNKO-cDK1-3w6f-cJ9Hta', 'dm-uuid-CRYPT-LUKS2-7d002011c2d2447885164cfbbdeaec0b-2FL70X-UBfe-Oz8q-jNKO-cDK1-3w6f-cJ9Hta'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-02 05:35:29.038571 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:29.038576 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:35:29.038582 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:29.038587 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-51-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 
'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-02 05:35:29.038596 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:29.038601 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:29.038606 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:29.038619 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_212ed843-6edd-4565-8465-188b3268426b', 'scsi-SQEMU_QEMU_HARDDISK_212ed843-6edd-4565-8465-188b3268426b'], 'uuids': [], 
'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '212ed843', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_212ed843-6edd-4565-8465-188b3268426b-part16', 'scsi-SQEMU_QEMU_HARDDISK_212ed843-6edd-4565-8465-188b3268426b-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_212ed843-6edd-4565-8465-188b3268426b-part14', 'scsi-SQEMU_QEMU_HARDDISK_212ed843-6edd-4565-8465-188b3268426b-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_212ed843-6edd-4565-8465-188b3268426b-part15', 'scsi-SQEMU_QEMU_HARDDISK_212ed843-6edd-4565-8465-188b3268426b-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_212ed843-6edd-4565-8465-188b3268426b-part1', 'scsi-SQEMU_QEMU_HARDDISK_212ed843-6edd-4565-8465-188b3268426b-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-02 05:35:29.175489 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:29.175604 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:35:29.175619 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:35:29.175632 | orchestrator | 2026-02-02 05:35:29.175643 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-02 05:35:29.175654 | orchestrator | Monday 02 February 2026 05:35:29 +0000 (0:00:02.382) 0:01:56.607 ******* 2026-02-02 05:35:29.175666 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.175679 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.175689 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.175713 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.175742 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.175759 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.175769 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.175788 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91f9e36e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part16', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part14', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part15', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part1', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.175808 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.319032 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.319110 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.319121 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.319129 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.319150 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': 
['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-27-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.319159 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.319203 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.319212 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.319226 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2343887', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part16', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part14', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part15', 
'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part1', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.319236 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.319292 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.807337 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:35:29.807461 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.807488 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.807507 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.807566 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-23-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.807603 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.807649 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.807696 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.807728 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0dc97797', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part16', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': 
'913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part14', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part15', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part1', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.807758 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.807775 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.807794 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:35:29.807812 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:35:29.807840 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.948469 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--af42a967--eb71--546a--abb0--a5185990ed2a-osd--block--af42a967--eb71--546a--abb0--a5185990ed2a', 'dm-uuid-LVM-nQNI9mGSypmWJN7Kribh0RNL5qLQKFSceYxT4mfzBYfoYiha3ZzoEdYR0rTnnIvK'], 'uuids': ['a78e3f4b-723a-42a3-abd4-4d699a55c416'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1f26c814', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['eYxT4m-fzBY-foYi-ha3Z-zoEd-YR0r-TnnIvK']}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.948573 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c15f901f-7629-41e5-bfd5-e721d3f198c6', 'scsi-SQEMU_QEMU_HARDDISK_c15f901f-7629-41e5-bfd5-e721d3f198c6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c15f901f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.948605 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.948648 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--106e1245--4ea8--54a2--9b27--5c2b147fae19-osd--block--106e1245--4ea8--54a2--9b27--5c2b147fae19', 'dm-uuid-LVM-7fojGdQjjxzlZ1d67G3lfXV0uQvvNrpG74l8TP6AWG5LY1LTlUkEVjmQPc2hTMkL'], 'uuids': ['0037b285-4ac2-45c2-8d5f-985073fa4cde'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2d3e981f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['74l8TP-6AWG-5LY1-LTlU-kEVj-mQPc-2hTMkL']}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.948661 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_076229ff-17a9-47be-973d-14b64a36a012', 'scsi-SQEMU_QEMU_HARDDISK_076229ff-17a9-47be-973d-14b64a36a012'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '076229ff', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.948691 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-HOxmXw-N5cX-V1Nz-Lu3r-OQk9-N5gG-1syyTi', 'scsi-0QEMU_QEMU_HARDDISK_5578c4aa-4507-4a80-9665-78072b9f11f4', 'scsi-SQEMU_QEMU_HARDDISK_5578c4aa-4507-4a80-9665-78072b9f11f4'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5578c4aa', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--2b8f5a57--fc4d--5c4a--8869--764dca19b379-osd--block--2b8f5a57--fc4d--5c4a--8869--764dca19b379']}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.948714 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-AITawh-CkpC-7L3c-Vqqe-GXUP-7eEh-WwcXRH', 'scsi-0QEMU_QEMU_HARDDISK_9dac4244-a4bc-44f9-ad81-53a595dd15e5', 'scsi-SQEMU_QEMU_HARDDISK_9dac4244-a4bc-44f9-ad81-53a595dd15e5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9dac4244', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89-osd--block--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89']}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.948735 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.948746 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.948758 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:29.948778 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.012168 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.012296 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-gyH8a8-4MHr-Jn7b-cz7p-hSu8-LEA3-bm3DqO', 'dm-uuid-CRYPT-LUKS2-8edeb25f170042ba8e6d0505727d2968-gyH8a8-4MHr-Jn7b-cz7p-hSu8-LEA3-bm3DqO'], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.012330 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.012340 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.012348 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': 
None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.012375 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2b8f5a57--fc4d--5c4a--8869--764dca19b379-osd--block--2b8f5a57--fc4d--5c4a--8869--764dca19b379', 'dm-uuid-LVM-2Xx1rXy8ZvvzVeymXUM2Y23jmTeKUn30gyH8a84MHrJn7bcz7phSu8LEA3bm3DqO'], 'uuids': ['8edeb25f-1700-42ba-8e6d-0505727d2968'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5578c4aa', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['gyH8a8-4MHr-Jn7b-cz7p-hSu8-LEA3-bm3DqO']}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.012389 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-26-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 
05:35:30.012420 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-yf6lEa-f3nO-iewk-DEDy-Fb6j-Kq2P-dbkgMf', 'scsi-0QEMU_QEMU_HARDDISK_1f26c814-af40-4046-ac8d-013998d956cc', 'scsi-SQEMU_QEMU_HARDDISK_1f26c814-af40-4046-ac8d-013998d956cc'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1f26c814', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--af42a967--eb71--546a--abb0--a5185990ed2a-osd--block--af42a967--eb71--546a--abb0--a5185990ed2a']}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.012450 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e4fc6918--1796--5a48--9994--5f31e91196e6-osd--block--e4fc6918--1796--5a48--9994--5f31e91196e6', 'dm-uuid-LVM-o4NjfQidgd0d8Dt2ERSF2CVjMcc1iNdF2FL70XUBfeOz8qjNKOcDK13w6fcJ9Hta'], 'uuids': ['7d002011-c2d2-4478-8516-4cfbbdeaec0b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '10248bd5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2FL70X-UBfe-Oz8q-jNKO-cDK1-3w6f-cJ9Hta']}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.012459 | orchestrator | 
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.012491 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2944b273', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part16', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part14', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part15', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part1', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.093156 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.093284 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.093297 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e969e129-18ea-460f-85bc-8dfb49c82359', 'scsi-SQEMU_QEMU_HARDDISK_e969e129-18ea-460f-85bc-8dfb49c82359'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e969e129', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.093305 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.093312 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-eYxT4m-fzBY-foYi-ha3Z-zoEd-YR0r-TnnIvK', 'dm-uuid-CRYPT-LUKS2-a78e3f4b723a42a3abd44d699a55c416-eYxT4m-fzBY-foYi-ha3Z-zoEd-YR0r-TnnIvK'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.093361 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qjdzC2-uhmD-TpwQ-o3eu-AERk-xIpn-IuLEqz', 'scsi-0QEMU_QEMU_HARDDISK_bc39994b-92aa-40f2-807e-6457f6f8ea40', 'scsi-SQEMU_QEMU_HARDDISK_bc39994b-92aa-40f2-807e-6457f6f8ea40'], 'uuids': [], 'labels': [], 
'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bc39994b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--d54a22ee--8606--5662--853b--b39e232caa8f-osd--block--d54a22ee--8606--5662--853b--b39e232caa8f']}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.093372 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-HvMSds-O6Yc-SLb4-aDqG-ATNE-cOud-g8iQom', 'dm-uuid-CRYPT-LUKS2-6399826b15f3492994c0bc4d1d3bf1c1-HvMSds-O6Yc-SLb4-aDqG-ATNE-cOud-g8iQom'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.093379 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.093386 | orchestrator | 
skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.093392 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.093400 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89-osd--block--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89', 'dm-uuid-LVM-bGXwDmNnGJLl15xDO66UDgeGoDbpg8C0HvMSdsO6YcSLb4aDqGATNEcOudg8iQom'], 'uuids': ['6399826b-15f3-4929-94c0-bc4d1d3bf1c1'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '9dac4244', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['HvMSds-O6Yc-SLb4-aDqG-ATNE-cOud-g8iQom']}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.093422 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-24-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.163834 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-QbZaLy-yUYT-ccut-PcI7-2pGL-9PmJ-6NoPFr', 'scsi-0QEMU_QEMU_HARDDISK_2d3e981f-8554-4288-941a-275f46913f28', 'scsi-SQEMU_QEMU_HARDDISK_2d3e981f-8554-4288-941a-275f46913f28'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2d3e981f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--106e1245--4ea8--54a2--9b27--5c2b147fae19-osd--block--106e1245--4ea8--54a2--9b27--5c2b147fae19']}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.163957 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.163981 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:35:30.164077 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.164091 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-WJAtRW-tNBy-gZ9y-UEjc-aaQo-SOl1-TBvsQs', 'dm-uuid-CRYPT-LUKS2-756889fb99344894803ed86e669bebbd-WJAtRW-tNBy-gZ9y-UEjc-aaQo-SOl1-TBvsQs'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.164165 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6d8209b1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part16', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part14', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part15', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': 
'5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part1', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.164179 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.164189 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:35:30.164207 | orchestrator | 
skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 05:35:30.164223 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 05:35:30.164242 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 05:35:30.237582 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 05:35:30.237680 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d54a22ee--8606--5662--853b--b39e232caa8f-osd--block--d54a22ee--8606--5662--853b--b39e232caa8f', 'dm-uuid-LVM-oyVS0lpzZeiZxxmfRvad67kbexmRBG5IWJAtRWtNBygZ9yUEjcaaQoSOl1TBvsQs'], 'uuids': ['756889fb-9934-4894-803e-d86e669bebbd'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'bc39994b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['WJAtRW-tNBy-gZ9y-UEjc-aaQo-SOl1-TBvsQs']}}, 'ansible_loop_var': 'item'})
2026-02-02 05:35:30.237697 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-51-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 05:35:30.237732 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 05:35:30.237758 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 05:35:30.237787 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-74l8TP-6AWG-5LY1-LTlU-kEVj-mQPc-2hTMkL', 'dm-uuid-CRYPT-LUKS2-0037b2854ac245c28d5f985073fa4cde-74l8TP-6AWG-5LY1-LTlU-kEVj-mQPc-2hTMkL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 05:35:30.237800 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-etyEN7-O4pu-QliJ-NKxv-0HLx-jIcx-JGZ0d7', 'scsi-0QEMU_QEMU_HARDDISK_10248bd5-0286-487e-81b0-791c797cb21b', 'scsi-SQEMU_QEMU_HARDDISK_10248bd5-0286-487e-81b0-791c797cb21b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '10248bd5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--e4fc6918--1796--5a48--9994--5f31e91196e6-osd--block--e4fc6918--1796--5a48--9994--5f31e91196e6']}}, 'ansible_loop_var': 'item'})
2026-02-02 05:35:30.237815 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 05:35:30.237827 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:35:30.237870 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_212ed843-6edd-4565-8465-188b3268426b', 'scsi-SQEMU_QEMU_HARDDISK_212ed843-6edd-4565-8465-188b3268426b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '212ed843', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_212ed843-6edd-4565-8465-188b3268426b-part16', 'scsi-SQEMU_QEMU_HARDDISK_212ed843-6edd-4565-8465-188b3268426b-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_212ed843-6edd-4565-8465-188b3268426b-part14', 'scsi-SQEMU_QEMU_HARDDISK_212ed843-6edd-4565-8465-188b3268426b-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_212ed843-6edd-4565-8465-188b3268426b-part15', 'scsi-SQEMU_QEMU_HARDDISK_212ed843-6edd-4565-8465-188b3268426b-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_212ed843-6edd-4565-8465-188b3268426b-part1', 'scsi-SQEMU_QEMU_HARDDISK_212ed843-6edd-4565-8465-188b3268426b-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 05:35:43.694072 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 05:35:43.694192 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 05:35:43.694211 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 05:35:43.694320 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2a7e3dd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part16', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part14', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part15', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part1', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 05:35:43.694340 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:35:43.694422 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 05:35:43.694439 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 05:35:43.694451 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2FL70X-UBfe-Oz8q-jNKO-cDK1-3w6f-cJ9Hta', 'dm-uuid-CRYPT-LUKS2-7d002011c2d2447885164cfbbdeaec0b-2FL70X-UBfe-Oz8q-jNKO-cDK1-3w6f-cJ9Hta'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 05:35:43.694474 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:35:43.694485 | orchestrator |
2026-02-02 05:35:43.694498 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-02 05:35:43.694511 | orchestrator | Monday 02 February 2026 05:35:31 +0000 (0:00:02.385) 0:01:58.993 *******
2026-02-02 05:35:43.694558 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:35:43.694573 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:35:43.694586 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:35:43.694598 | orchestrator | ok: [testbed-node-3]
2026-02-02 05:35:43.694611 | orchestrator | ok: [testbed-node-4]
2026-02-02 05:35:43.694623 | orchestrator | ok: [testbed-node-5]
2026-02-02 05:35:43.694635 | orchestrator | ok: [testbed-manager]
2026-02-02 05:35:43.694647 | orchestrator |
2026-02-02 05:35:43.694660 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-02 05:35:43.694672 | orchestrator | Monday 02 February 2026 05:35:33 +0000 (0:00:02.491) 0:02:01.484 *******
2026-02-02 05:35:43.694685 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:35:43.694704 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:35:43.694724 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:35:43.694746 | orchestrator | ok: [testbed-node-3]
2026-02-02 05:35:43.694762 | orchestrator | ok: [testbed-node-4]
2026-02-02 05:35:43.694775 | orchestrator | ok: [testbed-node-5]
2026-02-02 05:35:43.694800 | orchestrator | ok: [testbed-manager]
2026-02-02 05:35:43.694812 | orchestrator |
2026-02-02 05:35:43.694825 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-02 05:35:43.694837 | orchestrator | Monday 02 February 2026 05:35:35 +0000 (0:00:01.852) 0:02:03.337 *******
2026-02-02 05:35:43.694850 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:35:43.694863 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:35:43.694883 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:35:43.694894 | orchestrator | ok: [testbed-node-3]
2026-02-02 05:35:43.694905 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:35:43.694916 | orchestrator | ok: [testbed-node-4]
2026-02-02 05:35:43.694926 | orchestrator | ok: [testbed-node-5]
2026-02-02 05:35:43.694937 | orchestrator |
2026-02-02 05:35:43.694948 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-02 05:35:43.694959 | orchestrator | Monday 02 February 2026 05:35:38 +0000 (0:00:02.415) 0:02:05.752 *******
2026-02-02 05:35:43.694970 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:35:43.694989 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:35:43.695009 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:35:43.695029 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:35:43.695045 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:35:43.695063 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:35:43.695081 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:35:43.695098 | orchestrator |
2026-02-02 05:35:43.695115 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-02 05:35:43.695134 | orchestrator | Monday 02 February 2026 05:35:40 +0000 (0:00:01.839) 0:02:07.591 *******
2026-02-02 05:35:43.695150 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:35:43.695162 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:35:43.695172 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:35:43.695183 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:35:43.695193 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:35:43.695215 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:36:12.700639 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)]
2026-02-02 05:36:12.700920 | orchestrator |
2026-02-02 05:36:12.700960 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-02 05:36:12.700984 | orchestrator | Monday 02 February 2026 05:35:43 +0000 (0:00:03.671) 0:02:11.263 *******
2026-02-02 05:36:12.701002 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:36:12.701022 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:36:12.701041 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:36:12.701060 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:36:12.701079 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:36:12.701098 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:36:12.701114 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:36:12.701125 | orchestrator |
2026-02-02 05:36:12.701136 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-02 05:36:12.701148 | orchestrator | Monday 02 February 2026 05:35:45 +0000 (0:00:02.071) 0:02:13.335 *******
2026-02-02 05:36:12.701159 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 05:36:12.701171 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-02 05:36:12.701184 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-02 05:36:12.701196 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-02 05:36:12.701209 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-02 05:36:12.701222 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-02 05:36:12.701234 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-02 05:36:12.701247 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-02 05:36:12.701259 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-02 05:36:12.701272 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-02 05:36:12.701307 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-02 05:36:12.701320 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-02 05:36:12.701333 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-02 05:36:12.701345 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-02 05:36:12.701357 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-02 05:36:12.701370 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-02 05:36:12.701382 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-02 05:36:12.701394 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-02 05:36:12.701406 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-02-02 05:36:12.701418 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-02-02 05:36:12.701430 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-02-02 05:36:12.701443 | orchestrator |
2026-02-02 05:36:12.701455 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-02 05:36:12.701467 | orchestrator | Monday 02 February 2026 05:35:49 +0000 (0:00:03.376) 0:02:16.712 *******
2026-02-02 05:36:12.701479 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 05:36:12.701491 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-02 05:36:12.701503 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-02 05:36:12.701515 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:36:12.701528 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-02 05:36:12.701540 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-02 05:36:12.701551 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-02 05:36:12.701562 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:36:12.701572 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-02 05:36:12.701583 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-02 05:36:12.701594 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-02 05:36:12.701630 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:36:12.701641 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-02 05:36:12.701652 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-02 05:36:12.701662 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-02 05:36:12.701673 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:36:12.701684 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-02 05:36:12.701694 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-02 05:36:12.701705 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-02 05:36:12.701730 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:36:12.701741 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-02 05:36:12.701752 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-02 05:36:12.701762 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-02 05:36:12.701773 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:36:12.701786 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-02 05:36:12.701805 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-02-02 05:36:12.701822 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-02-02 05:36:12.701840 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:36:12.701858 | orchestrator |
2026-02-02 05:36:12.701875 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-02 05:36:12.701894 | orchestrator | Monday 02 February 2026 05:35:51 +0000 (0:00:02.152) 0:02:18.864 *******
2026-02-02 05:36:12.701911 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:36:12.701926 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:36:12.701944 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:36:12.702088 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:36:12.702114 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 05:36:12.702135 | orchestrator |
2026-02-02 05:36:12.702196 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-02 05:36:12.702211 | orchestrator | Monday 02 February 2026 05:35:53 +0000 (0:00:01.899) 0:02:20.764 *******
2026-02-02 05:36:12.702222 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:36:12.702233 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:36:12.702243 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:36:12.702254 | orchestrator |
2026-02-02 05:36:12.702265 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-02 05:36:12.702331 | orchestrator | Monday 02 February 2026 05:35:54 +0000 (0:00:01.580) 0:02:22.344 *******
2026-02-02 05:36:12.702343 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:36:12.702354 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:36:12.702365 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:36:12.702376 | orchestrator |
2026-02-02 05:36:12.702387 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-02 05:36:12.702398 | orchestrator | Monday 02 February 2026 05:35:56 +0000 (0:00:01.391) 0:02:23.735 *******
2026-02-02 05:36:12.702408 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:36:12.702419 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:36:12.702430 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:36:12.702441 | orchestrator |
2026-02-02 05:36:12.702451 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-02 05:36:12.702462 | orchestrator | Monday 02 February 2026 05:35:57 +0000 (0:00:01.363) 0:02:25.099 *******
2026-02-02 05:36:12.702473 | orchestrator | ok: [testbed-node-3]
2026-02-02 05:36:12.702484 | orchestrator | ok: [testbed-node-4]
2026-02-02 05:36:12.702495 | orchestrator | ok: [testbed-node-5]
2026-02-02 05:36:12.702505 | orchestrator |
2026-02-02 05:36:12.702517 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-02 05:36:12.702558 | orchestrator | Monday 02 February 2026 05:35:59 +0000 (0:00:01.524) 0:02:26.624 *******
2026-02-02 05:36:12.702578 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 05:36:12.702598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 05:36:12.702615 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 05:36:12.702633 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:36:12.702644 | orchestrator |
2026-02-02 05:36:12.702655 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-02 05:36:12.702666 | orchestrator | Monday 02 February 2026 05:36:00 +0000 (0:00:01.843) 0:02:28.467 *******
2026-02-02 05:36:12.702676 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 05:36:12.702687 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 05:36:12.702698 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 05:36:12.702708 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:36:12.702719 | orchestrator |
2026-02-02 05:36:12.702730 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-02 05:36:12.702740 | orchestrator | Monday 02 February 2026 05:36:02 +0000 (0:00:01.688) 0:02:30.156 *******
2026-02-02 05:36:12.702751 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 05:36:12.702761 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 05:36:12.702772 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 05:36:12.702782 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:36:12.702793 | orchestrator |
2026-02-02 05:36:12.702804 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-02 05:36:12.702815 | orchestrator | Monday 02 February 2026 05:36:04 +0000 (0:00:01.841) 0:02:31.997 *******
2026-02-02 05:36:12.702825 | orchestrator | ok: [testbed-node-3]
2026-02-02 05:36:12.702836 | orchestrator | ok: [testbed-node-4]
2026-02-02 05:36:12.702846 | orchestrator | ok: [testbed-node-5]
2026-02-02 05:36:12.702857 | orchestrator |
2026-02-02 05:36:12.702868 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-02 05:36:12.702878 | orchestrator | Monday 02 February 2026 05:36:05 +0000 (0:00:01.501) 0:02:33.499 *******
2026-02-02 05:36:12.702889 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-02 05:36:12.702900 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-02 05:36:12.702910 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-02 05:36:12.702921 | orchestrator |
2026-02-02 05:36:12.702931 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-02 05:36:12.702942 | orchestrator | Monday 02 February 2026 05:36:07 +0000 (0:00:01.646) 0:02:35.145 *******
2026-02-02 05:36:12.702952 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 05:36:12.702971 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 05:36:12.702982 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 05:36:12.702993 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-02 05:36:12.703004 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-02 05:36:12.703014 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-02 05:36:12.703025 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-02 05:36:12.703036 | orchestrator |
2026-02-02 05:36:12.703047 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-02 05:36:12.703057 | orchestrator | Monday 02 February 2026 05:36:09 +0000 (0:00:02.067) 0:02:37.213 *******
2026-02-02 05:36:12.703068 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 05:36:12.703078 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 05:36:12.703097 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 05:36:12.703118 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-02 05:36:59.847433 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-02 05:36:59.847517 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-02 05:36:59.847523 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-02 05:36:59.847529 | orchestrator |
2026-02-02 05:36:59.847534 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] **************************
2026-02-02 05:36:59.847539 | orchestrator | Monday 02 February 2026 05:36:12 +0000 (0:00:03.049) 0:02:40.262 *******
2026-02-02 05:36:59.847543 | orchestrator | changed: [testbed-node-4]
2026-02-02 05:36:59.847549 | orchestrator | changed: [testbed-node-3]
2026-02-02 05:36:59.847552 | orchestrator | changed: [testbed-node-5]
2026-02-02 05:36:59.847556 | orchestrator | changed: [testbed-manager]
2026-02-02 05:36:59.847560 | orchestrator | changed: [testbed-node-2]
2026-02-02 05:36:59.847564 | orchestrator | changed: [testbed-node-1]
2026-02-02 05:36:59.847567 | orchestrator | changed: [testbed-node-0]
2026-02-02 05:36:59.847571 | orchestrator |
2026-02-02 05:36:59.847575 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] ***********************
2026-02-02 05:36:59.847579 | orchestrator | Monday 02 February 2026 05:36:24 +0000 (0:00:11.406) 0:02:51.669 *******
2026-02-02 05:36:59.847583 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:36:59.847586 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:36:59.847590 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:36:59.847594 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:36:59.847598 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:36:59.847602 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:36:59.847606 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:36:59.847610 | orchestrator |
2026-02-02 05:36:59.847614 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ********************************
2026-02-02 05:36:59.847617 | orchestrator | Monday 02 February 2026 05:36:26 +0000 (0:00:02.164) 0:02:53.833 ******* 2026-02-02 05:36:59.847621 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:36:59.847625 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:36:59.847628 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:36:59.847632 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:36:59.847636 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:36:59.847640 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:36:59.847643 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:36:59.847647 | orchestrator | 2026-02-02 05:36:59.847651 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ******************************** 2026-02-02 05:36:59.847655 | orchestrator | Monday 02 February 2026 05:36:28 +0000 (0:00:01.856) 0:02:55.689 ******* 2026-02-02 05:36:59.847658 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:36:59.847662 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:36:59.847666 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:36:59.847670 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:36:59.847673 | orchestrator | changed: [testbed-node-3] 2026-02-02 05:36:59.847677 | orchestrator | changed: [testbed-node-4] 2026-02-02 05:36:59.847681 | orchestrator | changed: [testbed-node-5] 2026-02-02 05:36:59.847685 | orchestrator | 2026-02-02 05:36:59.847688 | orchestrator | TASK [ceph-validate : Include check_system.yml] ******************************** 2026-02-02 05:36:59.847692 | orchestrator | Monday 02 February 2026 05:36:31 +0000 (0:00:03.057) 0:02:58.746 ******* 2026-02-02 05:36:59.847697 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-02-02 05:36:59.847702 | orchestrator | 2026-02-02 05:36:59.847706 | orchestrator | TASK 
[ceph-validate : Fail on unsupported ansible version (1.X)] *************** 2026-02-02 05:36:59.847710 | orchestrator | Monday 02 February 2026 05:36:34 +0000 (0:00:02.959) 0:03:01.706 ******* 2026-02-02 05:36:59.847728 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:36:59.847732 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:36:59.847736 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:36:59.847740 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:36:59.847744 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:36:59.847747 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:36:59.847751 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:36:59.847755 | orchestrator | 2026-02-02 05:36:59.847758 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ****************************** 2026-02-02 05:36:59.847762 | orchestrator | Monday 02 February 2026 05:36:36 +0000 (0:00:01.882) 0:03:03.589 ******* 2026-02-02 05:36:59.847766 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:36:59.847770 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:36:59.847773 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:36:59.847777 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:36:59.847781 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:36:59.847785 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:36:59.847788 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:36:59.847792 | orchestrator | 2026-02-02 05:36:59.847796 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************ 2026-02-02 05:36:59.847799 | orchestrator | Monday 02 February 2026 05:36:38 +0000 (0:00:02.019) 0:03:05.608 ******* 2026-02-02 05:36:59.847803 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:36:59.847807 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:36:59.847811 | orchestrator | skipping: [testbed-node-2] 2026-02-02 
05:36:59.847814 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:36:59.847818 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:36:59.847822 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:36:59.847826 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:36:59.847829 | orchestrator | 2026-02-02 05:36:59.847833 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************ 2026-02-02 05:36:59.847837 | orchestrator | Monday 02 February 2026 05:36:39 +0000 (0:00:01.904) 0:03:07.513 ******* 2026-02-02 05:36:59.847841 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:36:59.847876 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:36:59.847880 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:36:59.847884 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:36:59.847888 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:36:59.847891 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:36:59.847895 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:36:59.847899 | orchestrator | 2026-02-02 05:36:59.847913 | orchestrator | TASK [ceph-validate : Fail on unsupported CentOS release] ********************** 2026-02-02 05:36:59.847917 | orchestrator | Monday 02 February 2026 05:36:42 +0000 (0:00:02.217) 0:03:09.730 ******* 2026-02-02 05:36:59.847921 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:36:59.847925 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:36:59.847929 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:36:59.847932 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:36:59.847936 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:36:59.847940 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:36:59.847943 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:36:59.847947 | orchestrator | 2026-02-02 05:36:59.847951 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu 
cloud archive] *** 2026-02-02 05:36:59.847955 | orchestrator | Monday 02 February 2026 05:36:44 +0000 (0:00:01.925) 0:03:11.656 ******* 2026-02-02 05:36:59.847958 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:36:59.847962 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:36:59.847966 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:36:59.847970 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:36:59.847974 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:36:59.847978 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:36:59.847983 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:36:59.847991 | orchestrator | 2026-02-02 05:36:59.847996 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] *** 2026-02-02 05:36:59.848000 | orchestrator | Monday 02 February 2026 05:36:46 +0000 (0:00:02.246) 0:03:13.903 ******* 2026-02-02 05:36:59.848004 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:36:59.848009 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:36:59.848013 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:36:59.848017 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:36:59.848021 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:36:59.848026 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:36:59.848030 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:36:59.848034 | orchestrator | 2026-02-02 05:36:59.848039 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] ************************** 2026-02-02 05:36:59.848043 | orchestrator | Monday 02 February 2026 05:36:48 +0000 (0:00:02.044) 0:03:15.947 ******* 2026-02-02 05:36:59.848047 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:36:59.848052 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:36:59.848056 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:36:59.848060 | orchestrator | skipping: 
[testbed-node-3] 2026-02-02 05:36:59.848064 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:36:59.848069 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:36:59.848073 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:36:59.848077 | orchestrator | 2026-02-02 05:36:59.848081 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] *** 2026-02-02 05:36:59.848086 | orchestrator | Monday 02 February 2026 05:36:50 +0000 (0:00:02.190) 0:03:18.138 ******* 2026-02-02 05:36:59.848090 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:36:59.848094 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:36:59.848098 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:36:59.848103 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:36:59.848107 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:36:59.848111 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:36:59.848116 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:36:59.848120 | orchestrator | 2026-02-02 05:36:59.848124 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ******************************** 2026-02-02 05:36:59.848129 | orchestrator | Monday 02 February 2026 05:36:52 +0000 (0:00:02.198) 0:03:20.337 ******* 2026-02-02 05:36:59.848133 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:36:59.848137 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:36:59.848141 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:36:59.848148 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:36:59.848155 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:36:59.848161 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:36:59.848167 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:36:59.848175 | orchestrator | 2026-02-02 05:36:59.848182 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ****************** 2026-02-02 
05:36:59.848189 | orchestrator | Monday 02 February 2026 05:36:54 +0000 (0:00:01.933) 0:03:22.270 ******* 2026-02-02 05:36:59.848197 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:36:59.848204 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:36:59.848211 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:36:59.848217 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:36:59.848224 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:36:59.848231 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:36:59.848237 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:36:59.848243 | orchestrator | 2026-02-02 05:36:59.848250 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] ******************************* 2026-02-02 05:36:59.848260 | orchestrator | Monday 02 February 2026 05:36:57 +0000 (0:00:02.427) 0:03:24.698 ******* 2026-02-02 05:36:59.848267 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:36:59.848273 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:36:59.848280 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:36:59.848292 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:36:59.848299 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:36:59.848307 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:36:59.848350 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:36:59.848356 | orchestrator | 2026-02-02 05:36:59.848360 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] ********************* 2026-02-02 05:36:59.848364 | orchestrator | Monday 02 February 2026 05:36:58 +0000 (0:00:01.864) 0:03:26.563 ******* 2026-02-02 05:36:59.848367 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:36:59.848371 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:36:59.848375 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:36:59.848380 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'})  2026-02-02 05:36:59.848385 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'})  2026-02-02 05:36:59.848389 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:36:59.848398 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'})  2026-02-02 05:37:24.654650 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'})  2026-02-02 05:37:24.654760 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:37:24.654775 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'})  2026-02-02 05:37:24.654786 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'})  2026-02-02 05:37:24.654796 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:37:24.654805 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:37:24.654815 | orchestrator | 2026-02-02 05:37:24.654826 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] ************* 2026-02-02 05:37:24.654837 | orchestrator | Monday 02 February 2026 05:37:01 +0000 (0:00:02.205) 0:03:28.768 ******* 2026-02-02 05:37:24.654846 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:37:24.654856 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:37:24.654865 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:37:24.654876 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:37:24.654885 | orchestrator | 
skipping: [testbed-node-4] 2026-02-02 05:37:24.654895 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:37:24.654904 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:37:24.654914 | orchestrator | 2026-02-02 05:37:24.654923 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************ 2026-02-02 05:37:24.654933 | orchestrator | Monday 02 February 2026 05:37:03 +0000 (0:00:01.860) 0:03:30.629 ******* 2026-02-02 05:37:24.654942 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:37:24.654950 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:37:24.654958 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:37:24.654966 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:37:24.654974 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:37:24.654981 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:37:24.654989 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:37:24.654997 | orchestrator | 2026-02-02 05:37:24.655005 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ****** 2026-02-02 05:37:24.655013 | orchestrator | Monday 02 February 2026 05:37:05 +0000 (0:00:02.116) 0:03:32.746 ******* 2026-02-02 05:37:24.655020 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:37:24.655028 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:37:24.655036 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:37:24.655044 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:37:24.655075 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:37:24.655083 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:37:24.655091 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:37:24.655099 | orchestrator | 2026-02-02 05:37:24.655106 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] *** 2026-02-02 05:37:24.655114 | orchestrator | Monday 02 February 2026 05:37:07 +0000 
(0:00:01.959) 0:03:34.705 ******* 2026-02-02 05:37:24.655122 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:37:24.655129 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:37:24.655137 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:37:24.655145 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:37:24.655152 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:37:24.655160 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:37:24.655168 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:37:24.655175 | orchestrator | 2026-02-02 05:37:24.655183 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ******************************** 2026-02-02 05:37:24.655191 | orchestrator | Monday 02 February 2026 05:37:09 +0000 (0:00:02.209) 0:03:36.914 ******* 2026-02-02 05:37:24.655201 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:37:24.655210 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:37:24.655219 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:37:24.655227 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:37:24.655236 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:37:24.655245 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:37:24.655253 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:37:24.655262 | orchestrator | 2026-02-02 05:37:24.655271 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] ************** 2026-02-02 05:37:24.655280 | orchestrator | Monday 02 February 2026 05:37:11 +0000 (0:00:02.054) 0:03:38.969 ******* 2026-02-02 05:37:24.655290 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:37:24.655311 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:37:24.655320 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:37:24.655353 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:37:24.655362 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:37:24.655371 | orchestrator 
| skipping: [testbed-node-5] 2026-02-02 05:37:24.655380 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:37:24.655389 | orchestrator | 2026-02-02 05:37:24.655398 | orchestrator | TASK [ceph-validate : Include check_devices.yml] ******************************* 2026-02-02 05:37:24.655407 | orchestrator | Monday 02 February 2026 05:37:13 +0000 (0:00:01.882) 0:03:40.851 ******* 2026-02-02 05:37:24.655416 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:37:24.655426 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:37:24.655435 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:37:24.655444 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:37:24.655454 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 05:37:24.655462 | orchestrator | 2026-02-02 05:37:24.655470 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************ 2026-02-02 05:37:24.655478 | orchestrator | Monday 02 February 2026 05:37:15 +0000 (0:00:02.572) 0:03:43.424 ******* 2026-02-02 05:37:24.655486 | orchestrator | ok: [testbed-node-3] 2026-02-02 05:37:24.655494 | orchestrator | ok: [testbed-node-4] 2026-02-02 05:37:24.655502 | orchestrator | ok: [testbed-node-5] 2026-02-02 05:37:24.655510 | orchestrator | 2026-02-02 05:37:24.655517 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] ************************** 2026-02-02 05:37:24.655525 | orchestrator | Monday 02 February 2026 05:37:17 +0000 (0:00:01.440) 0:03:44.864 ******* 2026-02-02 05:37:24.655547 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'})  2026-02-02 05:37:24.655555 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'})  
2026-02-02 05:37:24.655577 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:37:24.655585 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'})  2026-02-02 05:37:24.655593 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'})  2026-02-02 05:37:24.655601 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:37:24.655609 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'})  2026-02-02 05:37:24.655617 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'})  2026-02-02 05:37:24.655625 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:37:24.655633 | orchestrator | 2026-02-02 05:37:24.655641 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] *********************** 2026-02-02 05:37:24.655649 | orchestrator | Monday 02 February 2026 05:37:18 +0000 (0:00:01.457) 0:03:46.322 ******* 2026-02-02 05:37:24.655658 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'}, 'ansible_loop_var': 'item'})  2026-02-02 05:37:24.655668 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'}, 
'ansible_loop_var': 'item'})  2026-02-02 05:37:24.655677 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:37:24.655685 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'}, 'ansible_loop_var': 'item'})  2026-02-02 05:37:24.655693 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'}, 'ansible_loop_var': 'item'})  2026-02-02 05:37:24.655701 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:37:24.655709 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'}, 'ansible_loop_var': 'item'})  2026-02-02 05:37:24.655721 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'}, 'ansible_loop_var': 'item'})  2026-02-02 05:37:24.655729 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:37:24.655737 | orchestrator | 2026-02-02 05:37:24.655746 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] *** 2026-02-02 05:37:24.655754 | orchestrator | Monday 02 February 2026 05:37:20 +0000 (0:00:01.701) 0:03:48.024 ******* 
2026-02-02 05:37:24.655762 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:37:24.655769 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:37:24.655777 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:37:24.655790 | orchestrator | 2026-02-02 05:37:24.655798 | orchestrator | TASK [ceph-validate : Get devices information] ********************************* 2026-02-02 05:37:24.655806 | orchestrator | Monday 02 February 2026 05:37:21 +0000 (0:00:01.552) 0:03:49.576 ******* 2026-02-02 05:37:24.655814 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:37:24.655822 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:37:24.655830 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:37:24.655838 | orchestrator | 2026-02-02 05:37:24.655845 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] ************** 2026-02-02 05:37:24.655853 | orchestrator | Monday 02 February 2026 05:37:23 +0000 (0:00:01.336) 0:03:50.913 ******* 2026-02-02 05:37:24.655861 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:37:24.655874 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:37:29.587450 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:37:29.587554 | orchestrator | 2026-02-02 05:37:29.587572 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] *************** 2026-02-02 05:37:29.587586 | orchestrator | Monday 02 February 2026 05:37:24 +0000 (0:00:01.313) 0:03:52.226 ******* 2026-02-02 05:37:29.587599 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:37:29.587612 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:37:29.587624 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:37:29.587636 | orchestrator | 2026-02-02 05:37:29.587649 | orchestrator | TASK [ceph-validate : Check data logical volume] ******************************* 2026-02-02 05:37:29.587661 | orchestrator | Monday 02 February 2026 05:37:26 +0000 (0:00:01.459) 0:03:53.686 ******* 
2026-02-02 05:37:29.587674 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'}) 2026-02-02 05:37:29.587688 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'}) 2026-02-02 05:37:29.587700 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'}) 2026-02-02 05:37:29.587712 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'}) 2026-02-02 05:37:29.587724 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'}) 2026-02-02 05:37:29.587736 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'}) 2026-02-02 05:37:29.587748 | orchestrator | 2026-02-02 05:37:29.587760 | orchestrator | TASK [ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] *** 2026-02-02 05:37:29.587773 | orchestrator | Monday 02 February 2026 05:37:28 +0000 (0:00:02.053) 0:03:55.739 ******* 2026-02-02 05:37:29.587790 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379/osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 950, 'dev': 6, 'nlink': 1, 'atime': 1770003173.919426, 'mtime': 1770003173.915426, 'ctime': 1770003173.915426, 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379/osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'}, 'ansible_loop_var': 'item'})  2026-02-02 05:37:29.587866 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-af42a967-eb71-546a-abb0-a5185990ed2a/osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 960, 'dev': 6, 'nlink': 1, 'atime': 1770003192.343695, 'mtime': 1770003192.3386948, 'ctime': 1770003192.3386948, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-af42a967-eb71-546a-abb0-a5185990ed2a/osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': 
{'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'}, 'ansible_loop_var': 'item'})  2026-02-02 05:37:29.587881 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:37:29.587895 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89/osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 956, 'dev': 6, 'nlink': 1, 'atime': 1770003174.3175128, 'mtime': 1770003174.3135128, 'ctime': 1770003174.3135128, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89/osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'}, 'ansible_loop_var': 'item'})  2026-02-02 05:37:29.587908 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19/osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 966, 'dev': 6, 'nlink': 1, 'atime': 
1770003192.2517812, 'mtime': 1770003192.243781, 'ctime': 1770003192.243781, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19/osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'}, 'ansible_loop_var': 'item'})  2026-02-02 05:37:29.587985 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:37:29.588012 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-d54a22ee-8606-5662-853b-b39e232caa8f/osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 956, 'dev': 6, 'nlink': 1, 'atime': 1770003173.385977, 'mtime': 1770003173.3799767, 'ctime': 1770003173.3799767, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': 
'/dev/ceph-d54a22ee-8606-5662-853b-b39e232caa8f/osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'}, 'ansible_loop_var': 'item'})  2026-02-02 05:37:35.390658 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-e4fc6918-1796-5a48-9994-5f31e91196e6/osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 966, 'dev': 6, 'nlink': 1, 'atime': 1770003192.8092637, 'mtime': 1770003192.8062637, 'ctime': 1770003192.8062637, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-e4fc6918-1796-5a48-9994-5f31e91196e6/osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'}, 'ansible_loop_var': 'item'})  2026-02-02 05:37:35.390805 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:37:35.390837 | orchestrator | 2026-02-02 05:37:35.390852 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] *********************** 2026-02-02 05:37:35.390864 | 
orchestrator | Monday 02 February 2026 05:37:29 +0000 (0:00:01.424) 0:03:57.164 ******* 2026-02-02 05:37:35.390876 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'})  2026-02-02 05:37:35.390888 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'})  2026-02-02 05:37:35.390899 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:37:35.390910 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'})  2026-02-02 05:37:35.390921 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'})  2026-02-02 05:37:35.390954 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:37:35.390965 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'})  2026-02-02 05:37:35.390976 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'})  2026-02-02 05:37:35.390986 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:37:35.390997 | orchestrator | 2026-02-02 05:37:35.391008 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] *** 2026-02-02 05:37:35.391020 | orchestrator | Monday 02 February 2026 05:37:30 +0000 (0:00:01.408) 0:03:58.572 ******* 2026-02-02 05:37:35.391061 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 
'item': {'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'}, 'ansible_loop_var': 'item'})  2026-02-02 05:37:35.391077 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'}, 'ansible_loop_var': 'item'})  2026-02-02 05:37:35.391088 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:37:35.391099 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'}, 'ansible_loop_var': 'item'})  2026-02-02 05:37:35.391129 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'}, 'ansible_loop_var': 'item'})  2026-02-02 05:37:35.391141 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:37:35.391152 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'}, 'ansible_loop_var': 'item'})  2026-02-02 05:37:35.391163 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 
'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'}, 'ansible_loop_var': 'item'})  2026-02-02 05:37:35.391177 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:37:35.391190 | orchestrator | 2026-02-02 05:37:35.391202 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] ********************** 2026-02-02 05:37:35.391215 | orchestrator | Monday 02 February 2026 05:37:32 +0000 (0:00:01.388) 0:03:59.961 ******* 2026-02-02 05:37:35.391228 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'})  2026-02-02 05:37:35.391241 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'})  2026-02-02 05:37:35.391254 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:37:35.391266 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'})  2026-02-02 05:37:35.391279 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'})  2026-02-02 05:37:35.391301 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:37:35.391313 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'})  2026-02-02 05:37:35.391326 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'})  2026-02-02 05:37:35.391370 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:37:35.391384 | orchestrator | 2026-02-02 05:37:35.391396 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is 
not a device or doesn't exist] *** 2026-02-02 05:37:35.391409 | orchestrator | Monday 02 February 2026 05:37:34 +0000 (0:00:01.647) 0:04:01.609 ******* 2026-02-02 05:37:35.391422 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-2b8f5a57-fc4d-5c4a-8869-764dca19b379', 'data_vg': 'ceph-2b8f5a57-fc4d-5c4a-8869-764dca19b379'}, 'ansible_loop_var': 'item'})  2026-02-02 05:37:35.391435 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-af42a967-eb71-546a-abb0-a5185990ed2a', 'data_vg': 'ceph-af42a967-eb71-546a-abb0-a5185990ed2a'}, 'ansible_loop_var': 'item'})  2026-02-02 05:37:35.391447 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:37:35.391466 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89', 'data_vg': 'ceph-6932a8d0-72db-59d0-a33a-0c6e2cbd6a89'}, 'ansible_loop_var': 'item'})  2026-02-02 05:37:35.391478 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-106e1245-4ea8-54a2-9b27-5c2b147fae19', 'data_vg': 'ceph-106e1245-4ea8-54a2-9b27-5c2b147fae19'}, 'ansible_loop_var': 'item'})  2026-02-02 05:37:35.391491 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:37:35.391504 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 
'osd-block-d54a22ee-8606-5662-853b-b39e232caa8f', 'data_vg': 'ceph-d54a22ee-8606-5662-853b-b39e232caa8f'}, 'ansible_loop_var': 'item'})  2026-02-02 05:37:35.391525 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-e4fc6918-1796-5a48-9994-5f31e91196e6', 'data_vg': 'ceph-e4fc6918-1796-5a48-9994-5f31e91196e6'}, 'ansible_loop_var': 'item'})  2026-02-02 05:37:45.004768 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:37:45.004901 | orchestrator | 2026-02-02 05:37:45.004927 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] ******************************* 2026-02-02 05:37:45.004949 | orchestrator | Monday 02 February 2026 05:37:35 +0000 (0:00:01.351) 0:04:02.961 ******* 2026-02-02 05:37:45.004974 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:37:45.004998 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:37:45.005017 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:37:45.005035 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:37:45.005052 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:37:45.005070 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:37:45.005089 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:37:45.005108 | orchestrator | 2026-02-02 05:37:45.005127 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] ***************************** 2026-02-02 05:37:45.005179 | orchestrator | Monday 02 February 2026 05:37:37 +0000 (0:00:01.938) 0:04:04.899 ******* 2026-02-02 05:37:45.005199 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:37:45.005211 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:37:45.005222 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:37:45.005232 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:37:45.005244 | orchestrator | included: 
/ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 05:37:45.005255 | orchestrator | 2026-02-02 05:37:45.005266 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] ************** 2026-02-02 05:37:45.005277 | orchestrator | Monday 02 February 2026 05:37:39 +0000 (0:00:02.637) 0:04:07.537 ******* 2026-02-02 05:37:45.005289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005301 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005314 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005328 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005391 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:37:45.005404 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005417 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005429 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005442 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-02-02 05:37:45.005454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005466 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:37:45.005479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005507 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005522 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005536 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005548 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005561 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:37:45.005574 | orchestrator | 2026-02-02 05:37:45.005587 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ******************** 2026-02-02 05:37:45.005599 | orchestrator | Monday 02 February 2026 05:37:41 +0000 (0:00:01.448) 0:04:08.985 ******* 2026-02-02 05:37:45.005613 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005625 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005648 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 
05:37:45.005661 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005695 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005706 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:37:45.005717 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005738 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005759 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005770 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:37:45.005780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005791 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005802 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005812 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-02-02 05:37:45.005823 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005833 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:37:45.005844 | orchestrator | 2026-02-02 05:37:45.005855 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ******************** 2026-02-02 05:37:45.005866 | orchestrator | Monday 02 February 2026 05:37:43 +0000 (0:00:01.703) 0:04:10.689 ******* 2026-02-02 05:37:45.005876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005887 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005908 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005920 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005930 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:37:45.005941 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005952 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 
05:37:45.005979 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.005997 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.006007 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:37:45.006081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.006095 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.006106 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.006117 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.006128 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 05:37:45.006139 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:37:45.006150 | orchestrator | 2026-02-02 05:37:45.006216 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] *********************************** 2026-02-02 05:37:45.006239 | orchestrator | Monday 02 February 2026 05:37:44 +0000 (0:00:01.416) 0:04:12.105 ******* 2026-02-02 05:37:45.006256 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:37:45.006274 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:37:45.006304 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:38:00.076917 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:38:00.077040 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:38:00.077063 | 
orchestrator | skipping: [testbed-node-5] 2026-02-02 05:38:00.077079 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:38:00.077094 | orchestrator | 2026-02-02 05:38:00.077109 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] ***************************** 2026-02-02 05:38:00.077118 | orchestrator | Monday 02 February 2026 05:37:46 +0000 (0:00:02.046) 0:04:14.152 ******* 2026-02-02 05:38:00.077126 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:38:00.077135 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:38:00.077143 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:38:00.077151 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:38:00.077159 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:38:00.077166 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:38:00.077174 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:38:00.077182 | orchestrator | 2026-02-02 05:38:00.077190 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ****************** 2026-02-02 05:38:00.077198 | orchestrator | Monday 02 February 2026 05:37:48 +0000 (0:00:02.200) 0:04:16.352 ******* 2026-02-02 05:38:00.077206 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:38:00.077214 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:38:00.077222 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:38:00.077230 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:38:00.077238 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:38:00.077245 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:38:00.077253 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:38:00.077261 | orchestrator | 2026-02-02 05:38:00.077269 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] 
*** 2026-02-02 05:38:00.077277 | orchestrator | Monday 02 February 2026 05:37:50 +0000 (0:00:02.179) 0:04:18.531 ******* 2026-02-02 05:38:00.077285 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:38:00.077293 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:38:00.077300 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:38:00.077308 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:38:00.077316 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:38:00.077324 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:38:00.077332 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:38:00.077412 | orchestrator | 2026-02-02 05:38:00.077423 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] *** 2026-02-02 05:38:00.077433 | orchestrator | Monday 02 February 2026 05:37:52 +0000 (0:00:01.918) 0:04:20.450 ******* 2026-02-02 05:38:00.077440 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:38:00.077448 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:38:00.077458 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:38:00.077467 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:38:00.077477 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:38:00.077486 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:38:00.077495 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:38:00.077505 | orchestrator | 2026-02-02 05:38:00.077514 | orchestrator | TASK [ceph-validate : Validate container registry credentials] ***************** 2026-02-02 05:38:00.077524 | orchestrator | Monday 02 February 2026 05:37:54 +0000 (0:00:02.083) 0:04:22.534 ******* 2026-02-02 05:38:00.077533 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:38:00.077543 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:38:00.077552 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:38:00.077562 | orchestrator | skipping: [testbed-node-3] 
2026-02-02 05:38:00.077571 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:38:00.077581 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:38:00.077590 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:38:00.077599 | orchestrator | 2026-02-02 05:38:00.077609 | orchestrator | TASK [ceph-validate : Validate container service and container package] ******** 2026-02-02 05:38:00.077619 | orchestrator | Monday 02 February 2026 05:37:56 +0000 (0:00:02.026) 0:04:24.560 ******* 2026-02-02 05:38:00.077629 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:38:00.077638 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:38:00.077647 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:38:00.077657 | orchestrator | skipping: [testbed-node-3] 2026-02-02 05:38:00.077666 | orchestrator | skipping: [testbed-node-4] 2026-02-02 05:38:00.077675 | orchestrator | skipping: [testbed-node-5] 2026-02-02 05:38:00.077697 | orchestrator | skipping: [testbed-manager] 2026-02-02 05:38:00.077707 | orchestrator | 2026-02-02 05:38:00.077717 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] ********************** 2026-02-02 05:38:00.077727 | orchestrator | Monday 02 February 2026 05:37:59 +0000 (0:00:02.229) 0:04:26.790 ******* 2026-02-02 05:38:00.077738 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-02 05:38:00.077749 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-02 05:38:00.077761 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-02 05:38:00.077772 | orchestrator | skipping: 
[testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-02 05:38:00.077782 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-02 05:38:00.077794 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-02 05:38:00.077804 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:38:00.077829 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-02 05:38:00.077838 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-02 05:38:00.077853 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-02 05:38:00.077861 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-02 05:38:00.077869 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-02 05:38:00.077876 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  
2026-02-02 05:38:00.077884 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:38:00.077892 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-02 05:38:00.077900 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-02 05:38:00.077908 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-02 05:38:00.077916 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-02 05:38:00.077924 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-02 05:38:00.077932 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-02 05:38:00.077939 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:38:00.077947 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-02 05:38:00.077955 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-02 05:38:00.077967 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-02 05:38:00.077976 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-02 05:38:00.077984 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-02 05:38:00.077991 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-02 05:38:00.077999 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-02 05:38:00.078007 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-02 05:38:00.078063 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-02 05:38:00.078080 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-02 05:38:04.486478 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-02 05:38:04.487321 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-02 05:38:04.487346 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:38:04.487373 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-02 05:38:04.487379 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-02 05:38:04.487384 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:38:04.487389 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-02 05:38:04.487394 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-02 05:38:04.487399 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-02 05:38:04.487405 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-02 05:38:04.487410 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-02 05:38:04.487415 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-02 05:38:04.487421 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:38:04.487429 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-02 05:38:04.487436 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-02 05:38:04.487444 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-02 05:38:04.487464 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-02 05:38:04.487472 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:38:04.487479 | orchestrator |
2026-02-02 05:38:04.487488 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************
2026-02-02 05:38:04.487496 | orchestrator | Monday 02 February 2026 05:38:01 +0000 (0:00:02.262) 0:04:29.052 *******
2026-02-02 05:38:04.487503 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:38:04.487527 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:38:04.487534 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:38:04.487540 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:38:04.487547 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:38:04.487554 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:38:04.487561 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:38:04.487568 | orchestrator |
2026-02-02 05:38:04.487576 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] ****************************
2026-02-02 05:38:04.487583 | orchestrator | Monday 02 February 2026 05:38:03 +0000 (0:00:02.112) 0:04:31.165 *******
2026-02-02 05:38:04.487590 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-02 05:38:04.487597 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-02 05:38:04.487605 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-02 05:38:04.487624 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-02 05:38:04.487629 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-02 05:38:04.487634 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-02 05:38:04.487638 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:38:04.487643 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-02 05:38:04.487647 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-02 05:38:04.487651 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-02 05:38:04.487656 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-02 05:38:04.487660 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-02 05:38:04.487665 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-02 05:38:04.487669 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:38:04.487673 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-02 05:38:04.487678 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-02 05:38:04.487682 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-02 05:38:04.487686 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-02 05:38:04.487696 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-02 05:38:04.487700 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-02 05:38:04.487709 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:38:04.487714 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-02 05:38:04.487718 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-02 05:38:04.487722 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-02 05:38:04.487727 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-02 05:38:04.487731 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-02 05:38:04.487735 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-02 05:38:04.487740 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-02 05:38:04.487747 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-02 05:38:33.917446 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-02 05:38:33.917537 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-02 05:38:33.917547 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-02 05:38:33.917554 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-02 05:38:33.917563 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-02 05:38:33.917570 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:38:33.917577 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-02 05:38:33.917584 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:38:33.917590 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-02 05:38:33.917598 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-02 05:38:33.917622 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-02 05:38:33.917628 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-02 05:38:33.917634 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-02 05:38:33.917641 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-02 05:38:33.917647 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-02 05:38:33.917665 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-02 05:38:33.917671 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:38:33.917677 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-02 05:38:33.917683 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-02 05:38:33.917689 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:38:33.917696 | orchestrator |
2026-02-02 05:38:33.917703 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ******************************
2026-02-02 05:38:33.917710 | orchestrator | Monday 02 February 2026 05:38:05 +0000 (0:00:02.251) 0:04:33.417 *******
2026-02-02 05:38:33.917716 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:38:33.917723 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:38:33.917729 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:38:33.917735 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:38:33.917741 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:38:33.917747 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:38:33.917753 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:38:33.917759 | orchestrator |
2026-02-02 05:38:33.917765 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] ****************************
2026-02-02 05:38:33.917772 | orchestrator | Monday 02 February 2026 05:38:08 +0000 (0:00:02.207) 0:04:35.624 *******
2026-02-02 05:38:33.917778 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:38:33.917784 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:38:33.917790 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:38:33.917796 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:38:33.917802 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:38:33.917808 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:38:33.917814 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:38:33.917820 | orchestrator |
2026-02-02 05:38:33.917826 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] *****************************
2026-02-02 05:38:33.917845 | orchestrator | Monday 02 February 2026 05:38:10 +0000 (0:00:02.081) 0:04:37.706 *******
2026-02-02 05:38:33.917852 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:38:33.917858 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:38:33.917864 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:38:33.917870 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:38:33.917876 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:38:33.917882 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:38:33.917888 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:38:33.917900 | orchestrator |
2026-02-02 05:38:33.917907 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ********
2026-02-02 05:38:33.917913 | orchestrator | Monday 02 February 2026 05:38:12 +0000 (0:00:02.455) 0:04:40.162 *******
2026-02-02 05:38:33.917919 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-02-02 05:38:33.917927 | orchestrator |
2026-02-02 05:38:33.917933 | orchestrator | TASK [ceph-container-engine : Include specific variables] **********************
2026-02-02 05:38:33.917939 | orchestrator | Monday 02 February 2026 05:38:15 +0000 (0:00:02.818) 0:04:42.980 *******
2026-02-02 05:38:33.917945 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-02 05:38:33.917952 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-02 05:38:33.917958 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-02 05:38:33.917964 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-02 05:38:33.917971 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-02 05:38:33.917978 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-02 05:38:33.917985 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-02 05:38:33.917992 | orchestrator |
2026-02-02 05:38:33.917999 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] ****
2026-02-02 05:38:33.918007 | orchestrator | Monday 02 February 2026 05:38:17 +0000 (0:00:02.124) 0:04:45.104 *******
2026-02-02 05:38:33.918057 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:38:33.918067 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:38:33.918073 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:38:33.918080 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:38:33.918088 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:38:33.918095 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:38:33.918102 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:38:33.918110 | orchestrator |
2026-02-02 05:38:33.918117 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] *********
2026-02-02 05:38:33.918125 | orchestrator | Monday 02 February 2026 05:38:19 +0000 (0:00:02.233) 0:04:47.337 *******
2026-02-02 05:38:33.918132 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:38:33.918139 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:38:33.918146 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:38:33.918153 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:38:33.918160 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:38:33.918167 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:38:33.918174 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:38:33.918181 | orchestrator |
2026-02-02 05:38:33.918187 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] ***************
2026-02-02 05:38:33.918193 | orchestrator | Monday 02 February 2026 05:38:21 +0000 (0:00:02.080) 0:04:49.418 *******
2026-02-02 05:38:33.918204 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:38:33.918210 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:38:33.918217 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:38:33.918223 | orchestrator | ok: [testbed-node-3]
2026-02-02 05:38:33.918229 | orchestrator | ok: [testbed-node-4]
2026-02-02 05:38:33.918235 | orchestrator | ok: [testbed-node-5]
2026-02-02 05:38:33.918241 | orchestrator | ok: [testbed-manager]
2026-02-02 05:38:33.918247 | orchestrator |
2026-02-02 05:38:33.918253 | orchestrator | TASK [ceph-container-engine : Restart docker] **********************************
2026-02-02 05:38:33.918259 | orchestrator | Monday 02 February 2026 05:38:24 +0000 (0:00:02.841) 0:04:52.259 *******
2026-02-02 05:38:33.918265 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:38:33.918272 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:38:33.918284 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:38:33.918295 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:38:33.918305 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:38:33.918315 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:38:33.918325 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:38:33.918339 | orchestrator |
2026-02-02 05:38:33.918349 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-02-02 05:38:33.918359 | orchestrator | Monday 02 February 2026 05:38:27 +0000 (0:00:02.579) 0:04:54.839 *******
2026-02-02 05:38:33.918391 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:38:33.918401 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:38:33.918410 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:38:33.918421 | orchestrator | skipping: [testbed-node-3]
2026-02-02 05:38:33.918431 | orchestrator | skipping: [testbed-node-4]
2026-02-02 05:38:33.918441 | orchestrator | skipping: [testbed-node-5]
2026-02-02 05:38:33.918451 | orchestrator | skipping: [testbed-manager]
2026-02-02 05:38:33.918459 | orchestrator |
2026-02-02 05:38:33.918469 | orchestrator | TASK [Get the ceph release being deployed] *************************************
2026-02-02 05:38:33.918480 | orchestrator | Monday 02 February 2026 05:38:29 +0000 (0:00:02.592) 0:04:57.239 *******
2026-02-02 05:38:33.918491 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:38:33.918501 | orchestrator |
2026-02-02 05:38:33.918511 | orchestrator | TASK [Check ceph release being deployed] ***************************************
2026-02-02 05:38:33.918522 | orchestrator | Monday 02 February 2026 05:38:32 +0000 (0:00:02.592) 0:04:59.831 *******
2026-02-02 05:38:33.918533 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:38:33.918543 | orchestrator |
2026-02-02 05:38:33.918557 | orchestrator | PLAY [Ensure cluster config is applied] ****************************************
2026-02-02 05:39:13.273373 | orchestrator |
2026-02-02 05:39:13.273529 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-02 05:39:13.273548 | orchestrator | Monday 02 February 2026 05:38:33 +0000 (0:00:01.659) 0:05:01.491 *******
2026-02-02 05:39:13.273561 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:39:13.273573 | orchestrator |
2026-02-02 05:39:13.273584 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-02 05:39:13.273596 | orchestrator | Monday 02 February 2026 05:38:35 +0000 (0:00:01.498) 0:05:02.990 *******
2026-02-02 05:39:13.273607 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:39:13.273618 | orchestrator |
2026-02-02 05:39:13.273629 | orchestrator | TASK [Set cluster configs] *****************************************************
2026-02-02 05:39:13.273640 | orchestrator | Monday 02 February 2026 05:38:36 +0000 (0:00:01.119) 0:05:04.110 *******
2026-02-02 05:39:13.273653 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d80dfd80fe861993a62a686f3742b6b1f75a206'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-02 05:39:13.273666 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d80dfd80fe861993a62a686f3742b6b1f75a206'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-02 05:39:13.273677 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d80dfd80fe861993a62a686f3742b6b1f75a206'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-02 05:39:13.273689 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d80dfd80fe861993a62a686f3742b6b1f75a206'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-02 05:39:13.273728 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d80dfd80fe861993a62a686f3742b6b1f75a206'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-02 05:39:13.273755 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d80dfd80fe861993a62a686f3742b6b1f75a206'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__2d80dfd80fe861993a62a686f3742b6b1f75a206'}])
2026-02-02 05:39:13.273769 | orchestrator |
2026-02-02 05:39:13.273780 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-02-02 05:39:13.273791 | orchestrator |
2026-02-02 05:39:13.273802 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-02-02 05:39:13.273813 | orchestrator | Monday 02 February 2026 05:38:46 +0000 (0:00:10.148) 0:05:14.259 *******
2026-02-02 05:39:13.273824 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:39:13.273835 | orchestrator |
2026-02-02 05:39:13.273846 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-02-02 05:39:13.273857 | orchestrator | Monday 02 February 2026 05:38:48 +0000 (0:00:01.488) 0:05:15.747 *******
2026-02-02 05:39:13.273868 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:39:13.273879 | orchestrator |
2026-02-02 05:39:13.273889 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-02-02 05:39:13.273900 | orchestrator | Monday 02 February 2026 05:38:49 +0000 (0:00:01.143) 0:05:16.891 *******
2026-02-02 05:39:13.273912 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:39:13.273925 | orchestrator |
2026-02-02 05:39:13.273938 | orchestrator | TASK [Select a running monitor] ************************************************
2026-02-02 05:39:13.273951 | orchestrator | Monday 02 February 2026 05:38:50 +0000 (0:00:01.149) 0:05:18.041 *******
2026-02-02 05:39:13.273964 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:39:13.273977 | orchestrator |
2026-02-02 05:39:13.273990 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-02 05:39:13.274003 | orchestrator | Monday 02 February 2026 05:38:51 +0000 (0:00:01.133) 0:05:19.174 *******
2026-02-02 05:39:13.274070 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-02-02 05:39:13.274084 | orchestrator |
2026-02-02 05:39:13.274097 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-02 05:39:13.274127 | orchestrator | Monday 02 February 2026 05:38:52 +0000 (0:00:01.095) 0:05:20.270 *******
2026-02-02 05:39:13.274142 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:39:13.274154 | orchestrator |
2026-02-02 05:39:13.274167 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-02 05:39:13.274180 | orchestrator | Monday 02 February 2026 05:38:54 +0000 (0:00:01.467) 0:05:21.737 *******
2026-02-02 05:39:13.274192 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:39:13.274205 | orchestrator |
2026-02-02 05:39:13.274218 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-02 05:39:13.274230 | orchestrator | Monday 02 February 2026 05:38:55 +0000 (0:00:01.183) 0:05:22.921 *******
2026-02-02 05:39:13.274243 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:39:13.274255 | orchestrator |
2026-02-02 05:39:13.274268 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-02 05:39:13.274279 | orchestrator | Monday 02 February 2026 05:38:56 +0000 (0:00:01.448) 0:05:24.369 *******
2026-02-02 05:39:13.274290 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:39:13.274311 | orchestrator |
2026-02-02 05:39:13.274323 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-02 05:39:13.274334 | orchestrator | Monday 02 February 2026 05:38:57 +0000 (0:00:01.151) 0:05:25.520 *******
2026-02-02 05:39:13.274344 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:39:13.274355 | orchestrator |
2026-02-02 05:39:13.274366 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-02 05:39:13.274377 | orchestrator | Monday 02 February 2026 05:38:59 +0000 (0:00:01.176) 0:05:26.697 *******
2026-02-02 05:39:13.274405 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:39:13.274417 | orchestrator |
2026-02-02 05:39:13.274427 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-02 05:39:13.274439 | orchestrator | Monday 02 February 2026 05:39:00 +0000 (0:00:01.245) 0:05:27.943 *******
2026-02-02 05:39:13.274450 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:39:13.274461 | orchestrator |
2026-02-02 05:39:13.274471 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-02 05:39:13.274482 | orchestrator | Monday 02 February 2026 05:39:01 +0000 (0:00:01.142) 0:05:29.085 *******
2026-02-02 05:39:13.274493 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:39:13.274504 | orchestrator |
2026-02-02 05:39:13.274515 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-02 05:39:13.274526 | orchestrator | Monday 02 February 2026 05:39:02 +0000 (0:00:01.128) 0:05:30.214 *******
2026-02-02 05:39:13.274537 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 05:39:13.274548 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 05:39:13.274559 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 05:39:13.274570 | orchestrator |
2026-02-02 05:39:13.274581 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-02 05:39:13.274591 | orchestrator | Monday 02 February 2026 05:39:04 +0000 (0:00:01.653) 0:05:31.867 *******
2026-02-02 05:39:13.274602 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:39:13.274613 | orchestrator |
2026-02-02 05:39:13.274624 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-02 05:39:13.274635 | orchestrator | Monday 02 February 2026 05:39:05 +0000 (0:00:01.276) 0:05:33.144 *******
2026-02-02 05:39:13.274645 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 05:39:13.274657 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 05:39:13.274668 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 05:39:13.274678 | orchestrator |
2026-02-02 05:39:13.274689 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-02 05:39:13.274706 | orchestrator | Monday 02 February 2026 05:39:08 +0000 (0:00:03.165) 0:05:36.309 *******
2026-02-02 05:39:13.274717 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 05:39:13.274729 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-02 05:39:13.274739 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-02 05:39:13.274750 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:39:13.274761 | orchestrator |
2026-02-02 05:39:13.274772 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-02 05:39:13.274783 | orchestrator | Monday 02 February 2026 05:39:10 +0000 (0:00:01.428) 0:05:37.739 *******
2026-02-02 05:39:13.274796 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-02 05:39:13.274809 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-02 05:39:13.274827 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-02 05:39:13.274842 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:39:13.274861 | orchestrator |
2026-02-02 05:39:13.274879 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-02 05:39:13.274903 | orchestrator | Monday 02 February 2026 05:39:12 +0000 (0:00:01.943) 0:05:39.682 *******
2026-02-02 05:39:13.274944 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-02 05:39:33.420572 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-02 05:39:33.420685 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-02 05:39:33.420734 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:39:33.420745 | orchestrator |
2026-02-02 05:39:33.420754 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-02 05:39:33.420763 | orchestrator | Monday 02 February 2026 05:39:13 +0000 (0:00:01.161) 0:05:40.844 *******
2026-02-02 05:39:33.420772 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'fef826d0639c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-02 05:39:06.109747', 'end': '2026-02-02 05:39:06.167171', 'delta': '0:00:00.057424', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fef826d0639c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-02 05:39:33.420796 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'a42e682d4965', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-02 05:39:06.672420', 'end': '2026-02-02 05:39:06.734803', 'delta': '0:00:00.062383', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a42e682d4965'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item':
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-02 05:39:33.420804 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '39d29fabc2d2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-02 05:39:07.540598', 'end': '2026-02-02 05:39:07.584749', 'delta': '0:00:00.044151', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['39d29fabc2d2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-02 05:39:33.420831 | orchestrator | 2026-02-02 05:39:33.420839 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-02 05:39:33.420858 | orchestrator | Monday 02 February 2026 05:39:14 +0000 (0:00:01.191) 0:05:42.036 ******* 2026-02-02 05:39:33.420866 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:39:33.420883 | orchestrator | 2026-02-02 05:39:33.420890 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-02 05:39:33.420898 | orchestrator | Monday 02 February 2026 05:39:16 +0000 (0:00:01.651) 0:05:43.688 ******* 2026-02-02 05:39:33.420905 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:39:33.420912 | orchestrator | 2026-02-02 05:39:33.420919 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-02 05:39:33.420927 | orchestrator | Monday 02 February 2026 05:39:17 +0000 (0:00:01.212) 0:05:44.900 ******* 2026-02-02 05:39:33.420934 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:39:33.420941 | orchestrator | 2026-02-02 05:39:33.420948 | orchestrator | TASK 
[ceph-facts : Get current fsid] ******************************************* 2026-02-02 05:39:33.420955 | orchestrator | Monday 02 February 2026 05:39:18 +0000 (0:00:01.190) 0:05:46.091 ******* 2026-02-02 05:39:33.420976 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-02-02 05:39:33.420984 | orchestrator | 2026-02-02 05:39:33.420992 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 05:39:33.421001 | orchestrator | Monday 02 February 2026 05:39:20 +0000 (0:00:02.023) 0:05:48.114 ******* 2026-02-02 05:39:33.421009 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:39:33.421018 | orchestrator | 2026-02-02 05:39:33.421026 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-02 05:39:33.421035 | orchestrator | Monday 02 February 2026 05:39:21 +0000 (0:00:01.149) 0:05:49.264 ******* 2026-02-02 05:39:33.421044 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:39:33.421052 | orchestrator | 2026-02-02 05:39:33.421061 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-02 05:39:33.421069 | orchestrator | Monday 02 February 2026 05:39:22 +0000 (0:00:01.122) 0:05:50.386 ******* 2026-02-02 05:39:33.421078 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:39:33.421086 | orchestrator | 2026-02-02 05:39:33.421095 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 05:39:33.421103 | orchestrator | Monday 02 February 2026 05:39:24 +0000 (0:00:01.225) 0:05:51.612 ******* 2026-02-02 05:39:33.421112 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:39:33.421120 | orchestrator | 2026-02-02 05:39:33.421129 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-02 05:39:33.421137 | orchestrator | Monday 02 February 2026 05:39:25 +0000 (0:00:01.116) 0:05:52.729 ******* 
2026-02-02 05:39:33.421146 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:39:33.421154 | orchestrator | 2026-02-02 05:39:33.421163 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-02 05:39:33.421171 | orchestrator | Monday 02 February 2026 05:39:26 +0000 (0:00:01.217) 0:05:53.946 ******* 2026-02-02 05:39:33.421180 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:39:33.421189 | orchestrator | 2026-02-02 05:39:33.421197 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-02 05:39:33.421206 | orchestrator | Monday 02 February 2026 05:39:27 +0000 (0:00:01.120) 0:05:55.067 ******* 2026-02-02 05:39:33.421215 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:39:33.421223 | orchestrator | 2026-02-02 05:39:33.421239 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-02 05:39:33.421248 | orchestrator | Monday 02 February 2026 05:39:28 +0000 (0:00:01.090) 0:05:56.158 ******* 2026-02-02 05:39:33.421257 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:39:33.421265 | orchestrator | 2026-02-02 05:39:33.421274 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-02 05:39:33.421282 | orchestrator | Monday 02 February 2026 05:39:29 +0000 (0:00:01.105) 0:05:57.264 ******* 2026-02-02 05:39:33.421291 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:39:33.421300 | orchestrator | 2026-02-02 05:39:33.421309 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-02 05:39:33.421318 | orchestrator | Monday 02 February 2026 05:39:30 +0000 (0:00:01.234) 0:05:58.498 ******* 2026-02-02 05:39:33.421326 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:39:33.421336 | orchestrator | 2026-02-02 05:39:33.421344 | orchestrator | TASK [ceph-facts : Collect existed devices] 
************************************ 2026-02-02 05:39:33.421353 | orchestrator | Monday 02 February 2026 05:39:32 +0000 (0:00:01.217) 0:05:59.716 ******* 2026-02-02 05:39:33.421367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:39:33.421377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:39:33.421386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:39:33.421429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-02 05:39:33.421448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:39:34.663337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:39:34.663477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:39:34.663521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91f9e36e', 'removable': '0', 'support_discard': '4096', 'partitions': 
{'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part16', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part14', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part15', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part1', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-02 05:39:34.663531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:39:34.663536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:39:34.663541 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:39:34.663547 | orchestrator | 2026-02-02 05:39:34.663552 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-02 05:39:34.663558 | orchestrator | Monday 02 February 2026 05:39:33 +0000 (0:00:01.266) 0:06:00.982 ******* 2026-02-02 05:39:34.663576 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:39:34.663586 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:39:34.663592 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:39:34.663600 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:39:34.663606 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:39:34.663611 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:39:34.663621 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:39:58.640932 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91f9e36e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part16', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part14', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part15', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part1', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:39:58.641030 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:39:58.641042 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:39:58.641050 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:39:58.641058 | orchestrator | 2026-02-02 05:39:58.641066 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-02 05:39:58.641099 | 
orchestrator | Monday 02 February 2026 05:39:34 +0000 (0:00:01.253) 0:06:02.236 ******* 2026-02-02 05:39:58.641106 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:39:58.641114 | orchestrator | 2026-02-02 05:39:58.641120 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-02 05:39:58.641126 | orchestrator | Monday 02 February 2026 05:39:36 +0000 (0:00:01.520) 0:06:03.756 ******* 2026-02-02 05:39:58.641133 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:39:58.641139 | orchestrator | 2026-02-02 05:39:58.641145 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-02 05:39:58.641163 | orchestrator | Monday 02 February 2026 05:39:37 +0000 (0:00:01.112) 0:06:04.869 ******* 2026-02-02 05:39:58.641170 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:39:58.641177 | orchestrator | 2026-02-02 05:39:58.641183 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-02 05:39:58.641189 | orchestrator | Monday 02 February 2026 05:39:38 +0000 (0:00:01.438) 0:06:06.307 ******* 2026-02-02 05:39:58.641195 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:39:58.641201 | orchestrator | 2026-02-02 05:39:58.641207 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-02 05:39:58.641214 | orchestrator | Monday 02 February 2026 05:39:39 +0000 (0:00:01.161) 0:06:07.469 ******* 2026-02-02 05:39:58.641220 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:39:58.641226 | orchestrator | 2026-02-02 05:39:58.641232 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-02 05:39:58.641239 | orchestrator | Monday 02 February 2026 05:39:41 +0000 (0:00:01.266) 0:06:08.735 ******* 2026-02-02 05:39:58.641245 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:39:58.641251 | orchestrator | 2026-02-02 05:39:58.641257 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-02 05:39:58.641263 | orchestrator | Monday 02 February 2026 05:39:42 +0000 (0:00:01.173) 0:06:09.909 ******* 2026-02-02 05:39:58.641270 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-02 05:39:58.641276 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-02 05:39:58.641282 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-02 05:39:58.641288 | orchestrator | 2026-02-02 05:39:58.641294 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-02 05:39:58.641301 | orchestrator | Monday 02 February 2026 05:39:44 +0000 (0:00:01.932) 0:06:11.842 ******* 2026-02-02 05:39:58.641307 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-02 05:39:58.641313 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-02 05:39:58.641319 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-02 05:39:58.641325 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:39:58.641332 | orchestrator | 2026-02-02 05:39:58.641338 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-02 05:39:58.641349 | orchestrator | Monday 02 February 2026 05:39:45 +0000 (0:00:01.164) 0:06:13.006 ******* 2026-02-02 05:39:58.641355 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:39:58.641361 | orchestrator | 2026-02-02 05:39:58.641367 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-02 05:39:58.641373 | orchestrator | Monday 02 February 2026 05:39:46 +0000 (0:00:01.141) 0:06:14.147 ******* 2026-02-02 05:39:58.641380 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-02 05:39:58.641386 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 
05:39:58.641393 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 05:39:58.641399 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-02 05:39:58.641428 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-02 05:39:58.641436 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-02 05:39:58.641451 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-02 05:39:58.641458 | orchestrator |
2026-02-02 05:39:58.641465 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-02 05:39:58.641472 | orchestrator | Monday 02 February 2026 05:39:48 +0000 (0:00:02.187) 0:06:16.335 *******
2026-02-02 05:39:58.641479 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 05:39:58.641486 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 05:39:58.641495 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 05:39:58.641502 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-02 05:39:58.641509 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-02 05:39:58.641516 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-02 05:39:58.641523 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-02 05:39:58.641530 | orchestrator |
2026-02-02 05:39:58.641538 | orchestrator | TASK [Get ceph cluster status] *************************************************
2026-02-02 05:39:58.641545 | orchestrator | Monday 02 February 2026 05:39:51 +0000 (0:00:03.001) 0:06:19.336 *******
2026-02-02 05:39:58.641552 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-02-02 05:39:58.641559 | orchestrator |
2026-02-02 05:39:58.641566 | orchestrator | TASK [Display ceph health detail] **********************************************
2026-02-02 05:39:58.641574 | orchestrator | Monday 02 February 2026 05:39:53 +0000 (0:00:02.242) 0:06:21.579 *******
2026-02-02 05:39:58.641581 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:39:58.641588 | orchestrator |
2026-02-02 05:39:58.641595 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] ****************************
2026-02-02 05:39:58.641602 | orchestrator | Monday 02 February 2026 05:39:55 +0000 (0:00:01.238) 0:06:22.817 *******
2026-02-02 05:39:58.641610 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:39:58.641617 | orchestrator |
2026-02-02 05:39:58.641624 | orchestrator | TASK [Get the ceph quorum status] **********************************************
2026-02-02 05:39:58.641631 | orchestrator | Monday 02 February 2026 05:39:56 +0000 (0:00:01.129) 0:06:23.947 *******
2026-02-02 05:39:58.641639 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-02-02 05:39:58.641646 | orchestrator |
2026-02-02 05:39:58.641653 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] *****************
2026-02-02 05:39:58.641664 | orchestrator | Monday 02 February 2026 05:39:58 +0000 (0:00:02.261) 0:06:26.208 *******
2026-02-02 05:40:59.360164 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:40:59.360470 | orchestrator |
2026-02-02 05:40:59.360499 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ********************
2026-02-02 05:40:59.360513 | orchestrator | Monday 02 February 2026 05:39:59 +0000 (0:00:01.167) 0:06:27.376 *******
2026-02-02 05:40:59.360525 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 05:40:59.360537 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 05:40:59.360549 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 05:40:59.360559 | orchestrator |
2026-02-02 05:40:59.360572 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-02-02 05:40:59.360583 | orchestrator | Monday 02 February 2026 05:40:02 +0000 (0:00:02.564) 0:06:29.941 *******
2026-02-02 05:40:59.360594 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-02-02 05:40:59.360605 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-02-02 05:40:59.360617 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-02-02 05:40:59.360628 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-02-02 05:40:59.360663 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-02-02 05:40:59.360676 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-02-02 05:40:59.360686 | orchestrator |
2026-02-02 05:40:59.360697 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-02-02 05:40:59.360710 | orchestrator | Monday 02 February 2026 05:40:15 +0000 (0:00:13.114) 0:06:43.055 *******
2026-02-02 05:40:59.360723 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 05:40:59.360750 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 05:40:59.360763 | orchestrator |
2026-02-02 05:40:59.360776 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-02-02 05:40:59.360789 | orchestrator | Monday 02 February 2026 05:40:19 +0000 (0:00:03.649) 0:06:46.705 *******
2026-02-02 05:40:59.360802 | orchestrator | changed: [testbed-node-0]
2026-02-02 05:40:59.360815 | orchestrator |
2026-02-02 05:40:59.360827 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-02 05:40:59.360840 | orchestrator | Monday 02 February 2026 05:40:21 +0000 (0:00:02.444) 0:06:49.150 *******
2026-02-02 05:40:59.360853 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-02-02 05:40:59.360865 | orchestrator |
2026-02-02 05:40:59.360876 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-02 05:40:59.360887 | orchestrator | Monday 02 February 2026 05:40:23 +0000 (0:00:01.455) 0:06:50.606 *******
2026-02-02 05:40:59.360898 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-02-02 05:40:59.360909 | orchestrator |
2026-02-02 05:40:59.360920 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-02 05:40:59.360930 | orchestrator | Monday 02 February 2026 05:40:24 +0000 (0:00:01.543) 0:06:52.149 *******
2026-02-02 05:40:59.360941 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:40:59.360952 | orchestrator |
2026-02-02 05:40:59.360963 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-02 05:40:59.360973 | orchestrator | Monday 02 February 2026 05:40:26 +0000 (0:00:01.561) 0:06:53.710 *******
2026-02-02 05:40:59.360984 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:40:59.360995 | orchestrator |
2026-02-02 05:40:59.361006 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-02 05:40:59.361016 | orchestrator | Monday 02 February 2026 05:40:27 +0000 (0:00:01.171) 0:06:54.882 *******
2026-02-02 05:40:59.361027 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:40:59.361038 | orchestrator |
2026-02-02 05:40:59.361049 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-02 05:40:59.361059 | orchestrator | Monday 02 February 2026 05:40:28 +0000 (0:00:01.112) 0:06:55.995 *******
2026-02-02 05:40:59.361070 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:40:59.361081 | orchestrator |
2026-02-02 05:40:59.361092 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-02 05:40:59.361102 | orchestrator | Monday 02 February 2026 05:40:29 +0000 (0:00:01.122) 0:06:57.118 *******
2026-02-02 05:40:59.361113 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:40:59.361124 | orchestrator |
2026-02-02 05:40:59.361135 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-02 05:40:59.361145 | orchestrator | Monday 02 February 2026 05:40:31 +0000 (0:00:01.542) 0:06:58.660 *******
2026-02-02 05:40:59.361156 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:40:59.361167 | orchestrator |
2026-02-02 05:40:59.361178 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-02 05:40:59.361188 | orchestrator | Monday 02 February 2026 05:40:32 +0000 (0:00:01.164) 0:06:59.825 *******
2026-02-02 05:40:59.361199 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:40:59.361219 | orchestrator |
2026-02-02 05:40:59.361230 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-02 05:40:59.361240 | orchestrator | Monday 02 February 2026 05:40:33 +0000 (0:00:01.165) 0:07:00.991 *******
2026-02-02 05:40:59.361251 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:40:59.361262 | orchestrator |
2026-02-02 05:40:59.361273 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-02 05:40:59.361284 | orchestrator | Monday 02 February 2026 05:40:34 +0000 (0:00:01.552) 0:07:02.544 *******
2026-02-02 05:40:59.361294 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:40:59.361305 | orchestrator |
2026-02-02 05:40:59.361334 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-02 05:40:59.361346 | orchestrator | Monday 02 February 2026 05:40:36 +0000 (0:00:01.518) 0:07:04.062 *******
2026-02-02 05:40:59.361357 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:40:59.361367 | orchestrator |
2026-02-02 05:40:59.361378 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-02 05:40:59.361389 | orchestrator | Monday 02 February 2026 05:40:37 +0000 (0:00:01.109) 0:07:05.173 *******
2026-02-02 05:40:59.361400 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:40:59.361411 | orchestrator |
2026-02-02 05:40:59.361422 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-02 05:40:59.361503 | orchestrator | Monday 02 February 2026 05:40:38 +0000 (0:00:01.161) 0:07:06.334 *******
2026-02-02 05:40:59.361515 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:40:59.361526 | orchestrator |
2026-02-02 05:40:59.361537 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-02 05:40:59.361547 | orchestrator | Monday 02 February 2026 05:40:39 +0000 (0:00:01.130) 0:07:07.465 *******
2026-02-02 05:40:59.361558 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:40:59.361568 | orchestrator |
2026-02-02 05:40:59.361579 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-02 05:40:59.361590 | orchestrator | Monday 02 February 2026 05:40:40 +0000 (0:00:01.107) 0:07:08.573 *******
2026-02-02 05:40:59.361600 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:40:59.361611 | orchestrator |
2026-02-02 05:40:59.361622 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-02 05:40:59.361632 | orchestrator | Monday 02 February 2026 05:40:42 +0000 (0:00:01.105) 0:07:09.678 *******
2026-02-02 05:40:59.361643 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:40:59.361654 | orchestrator |
2026-02-02 05:40:59.361665 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-02 05:40:59.361676 | orchestrator | Monday 02 February 2026 05:40:43 +0000 (0:00:01.134) 0:07:10.812 *******
2026-02-02 05:40:59.361686 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:40:59.361697 | orchestrator |
2026-02-02 05:40:59.361708 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-02 05:40:59.361725 | orchestrator | Monday 02 February 2026 05:40:44 +0000 (0:00:01.190) 0:07:12.003 *******
2026-02-02 05:40:59.361735 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:40:59.361746 | orchestrator |
2026-02-02 05:40:59.361757 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-02 05:40:59.361768 | orchestrator | Monday 02 February 2026 05:40:45 +0000 (0:00:01.190) 0:07:13.193 *******
2026-02-02 05:40:59.361778 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:40:59.361789 | orchestrator |
2026-02-02 05:40:59.361800 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-02 05:40:59.361810 | orchestrator | Monday 02 February 2026 05:40:46 +0000 (0:00:01.185) 0:07:14.379 *******
2026-02-02 05:40:59.361821 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:40:59.361831 | orchestrator |
2026-02-02 05:40:59.361842 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-02 05:40:59.361853 | orchestrator | Monday 02 February 2026 05:40:47 +0000 (0:00:01.128) 0:07:15.507 *******
2026-02-02 05:40:59.361864 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:40:59.361882 | orchestrator |
2026-02-02 05:40:59.361893 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-02 05:40:59.361903 | orchestrator | Monday 02 February 2026 05:40:49 +0000 (0:00:01.117) 0:07:16.625 *******
2026-02-02 05:40:59.361914 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:40:59.361925 | orchestrator |
2026-02-02 05:40:59.361935 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-02 05:40:59.361946 | orchestrator | Monday 02 February 2026 05:40:50 +0000 (0:00:01.126) 0:07:17.751 *******
2026-02-02 05:40:59.361957 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:40:59.361968 | orchestrator |
2026-02-02 05:40:59.361978 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-02 05:40:59.361989 | orchestrator | Monday 02 February 2026 05:40:51 +0000 (0:00:01.186) 0:07:18.937 *******
2026-02-02 05:40:59.362000 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:40:59.362010 | orchestrator |
2026-02-02 05:40:59.362079 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-02 05:40:59.362091 | orchestrator | Monday 02 February 2026 05:40:52 +0000 (0:00:01.116) 0:07:20.054 *******
2026-02-02 05:40:59.362102 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:40:59.362113 | orchestrator |
2026-02-02 05:40:59.362124 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-02 05:40:59.362134 | orchestrator | Monday 02 February 2026 05:40:53 +0000 (0:00:01.096) 0:07:21.151 *******
2026-02-02 05:40:59.362145 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:40:59.362156 | orchestrator |
2026-02-02 05:40:59.362167 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-02 05:40:59.362178 | orchestrator | Monday 02 February 2026 05:40:54 +0000 (0:00:01.167) 0:07:22.318 *******
2026-02-02 05:40:59.362189 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:40:59.362199 | orchestrator |
2026-02-02 05:40:59.362210 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-02 05:40:59.362221 | orchestrator | Monday 02 February 2026 05:40:55 +0000 (0:00:01.211) 0:07:23.530 *******
2026-02-02 05:40:59.362232 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:40:59.362243 | orchestrator |
2026-02-02 05:40:59.362254 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-02 05:40:59.362265 | orchestrator | Monday 02 February 2026 05:40:57 +0000 (0:00:01.165) 0:07:24.696 *******
2026-02-02 05:40:59.362275 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:40:59.362286 | orchestrator |
2026-02-02 05:40:59.362297 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-02 05:40:59.362308 | orchestrator | Monday 02 February 2026 05:40:58 +0000 (0:00:01.102) 0:07:25.798 *******
2026-02-02 05:40:59.362319 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:40:59.362330 | orchestrator |
2026-02-02 05:40:59.362341 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-02 05:40:59.362352 | orchestrator | Monday 02 February 2026 05:40:59 +0000 (0:00:01.132) 0:07:26.931 *******
2026-02-02 05:41:51.891819 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:41:51.891940 | orchestrator |
2026-02-02 05:41:51.891958 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-02 05:41:51.891971 | orchestrator | Monday 02 February 2026 05:41:00 +0000 (0:00:01.125) 0:07:28.056 *******
2026-02-02 05:41:51.891982 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:41:51.891993 | orchestrator |
2026-02-02 05:41:51.892004 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-02 05:41:51.892015 | orchestrator | Monday 02 February 2026 05:41:01 +0000 (0:00:01.170) 0:07:29.227 *******
2026-02-02 05:41:51.892026 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:41:51.892038 | orchestrator |
2026-02-02 05:41:51.892049 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-02 05:41:51.892060 | orchestrator | Monday 02 February 2026 05:41:03 +0000 (0:00:01.923) 0:07:31.151 *******
2026-02-02 05:41:51.892071 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:41:51.892107 | orchestrator |
2026-02-02 05:41:51.892119 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-02 05:41:51.892129 | orchestrator | Monday 02 February 2026 05:41:06 +0000 (0:00:02.494) 0:07:33.646 *******
2026-02-02 05:41:51.892140 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-02-02 05:41:51.892152 | orchestrator |
2026-02-02 05:41:51.892163 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-02 05:41:51.892174 | orchestrator | Monday 02 February 2026 05:41:07 +0000 (0:00:01.445) 0:07:35.092 *******
2026-02-02 05:41:51.892184 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:41:51.892195 | orchestrator |
2026-02-02 05:41:51.892206 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-02 05:41:51.892217 | orchestrator | Monday 02 February 2026 05:41:08 +0000 (0:00:01.111) 0:07:36.204 *******
2026-02-02 05:41:51.892228 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:41:51.892239 | orchestrator |
2026-02-02 05:41:51.892250 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-02 05:41:51.892262 | orchestrator | Monday 02 February 2026 05:41:09 +0000 (0:00:01.149) 0:07:37.353 *******
2026-02-02 05:41:51.892287 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-02 05:41:51.892300 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-02 05:41:51.892314 | orchestrator |
2026-02-02 05:41:51.892326 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-02 05:41:51.892338 | orchestrator | Monday 02 February 2026 05:41:11 +0000 (0:00:01.841) 0:07:39.195 *******
2026-02-02 05:41:51.892350 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:41:51.892363 | orchestrator |
2026-02-02 05:41:51.892376 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-02 05:41:51.892388 | orchestrator | Monday 02 February 2026 05:41:13 +0000 (0:00:01.683) 0:07:40.879 *******
2026-02-02 05:41:51.892400 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:41:51.892412 | orchestrator |
2026-02-02 05:41:51.892424 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-02 05:41:51.892437 | orchestrator | Monday 02 February 2026 05:41:14 +0000 (0:00:01.138) 0:07:42.017 *******
2026-02-02 05:41:51.892477 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:41:51.892491 | orchestrator |
2026-02-02 05:41:51.892504 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-02 05:41:51.892516 | orchestrator | Monday 02 February 2026 05:41:15 +0000 (0:00:01.118) 0:07:43.135 *******
2026-02-02 05:41:51.892529 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:41:51.892542 | orchestrator |
2026-02-02 05:41:51.892555 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-02 05:41:51.892568 | orchestrator | Monday 02 February 2026 05:41:16 +0000 (0:00:01.127) 0:07:44.263 *******
2026-02-02 05:41:51.892580 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-02-02 05:41:51.892593 | orchestrator |
2026-02-02 05:41:51.892606 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-02 05:41:51.892618 | orchestrator | Monday 02 February 2026 05:41:18 +0000 (0:00:01.464) 0:07:45.727 *******
2026-02-02 05:41:51.892631 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:41:51.892644 | orchestrator |
2026-02-02 05:41:51.892657 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-02 05:41:51.892668 | orchestrator | Monday 02 February 2026 05:41:19 +0000 (0:00:01.701) 0:07:47.429 *******
2026-02-02 05:41:51.892679 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-02 05:41:51.892689 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-02 05:41:51.892700 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-02 05:41:51.892711 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:41:51.892730 | orchestrator |
2026-02-02 05:41:51.892742 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-02 05:41:51.892752 | orchestrator | Monday 02 February 2026 05:41:21 +0000 (0:00:01.215) 0:07:48.644 *******
2026-02-02 05:41:51.892763 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:41:51.892774 | orchestrator |
2026-02-02 05:41:51.892784 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-02 05:41:51.892795 | orchestrator | Monday 02 February 2026 05:41:22 +0000 (0:00:01.237) 0:07:49.882 *******
2026-02-02 05:41:51.892806 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:41:51.892816 | orchestrator |
2026-02-02 05:41:51.892827 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-02 05:41:51.892838 | orchestrator | Monday 02 February 2026 05:41:23 +0000 (0:00:01.275) 0:07:51.157 *******
2026-02-02 05:41:51.892848 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:41:51.892859 | orchestrator |
2026-02-02 05:41:51.892870 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-02 05:41:51.892897 | orchestrator | Monday 02 February 2026 05:41:24 +0000 (0:00:01.205) 0:07:52.363 *******
2026-02-02 05:41:51.892909 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:41:51.892920 | orchestrator |
2026-02-02 05:41:51.892931 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-02 05:41:51.892941 | orchestrator | Monday 02 February 2026 05:41:26 +0000 (0:00:01.554) 0:07:53.917 *******
2026-02-02 05:41:51.892952 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:41:51.892963 | orchestrator |
2026-02-02 05:41:51.892973 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-02 05:41:51.892984 | orchestrator | Monday 02 February 2026 05:41:27 +0000 (0:00:01.266) 0:07:55.184 *******
2026-02-02 05:41:51.892995 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:41:51.893005 | orchestrator |
2026-02-02 05:41:51.893016 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-02 05:41:51.893027 | orchestrator | Monday 02 February 2026 05:41:30 +0000 (0:00:02.583) 0:07:57.770 *******
2026-02-02 05:41:51.893038 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:41:51.893048 | orchestrator |
2026-02-02 05:41:51.893059 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-02 05:41:51.893070 | orchestrator | Monday 02 February 2026 05:41:31 +0000 (0:00:01.277) 0:07:59.048 *******
2026-02-02 05:41:51.893080 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-02-02 05:41:51.893091 | orchestrator |
2026-02-02 05:41:51.893102 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-02 05:41:51.893112 | orchestrator | Monday 02 February 2026 05:41:32 +0000 (0:00:01.501) 0:08:00.550 *******
2026-02-02 05:41:51.893123 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:41:51.893134 | orchestrator |
2026-02-02 05:41:51.893144 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-02 05:41:51.893155 | orchestrator | Monday 02 February 2026 05:41:34 +0000 (0:00:01.289) 0:08:01.839 *******
2026-02-02 05:41:51.893166 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:41:51.893176 | orchestrator |
2026-02-02 05:41:51.893187 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-02 05:41:51.893198 | orchestrator | Monday 02 February 2026 05:41:35 +0000 (0:00:01.199) 0:08:03.039 *******
2026-02-02 05:41:51.893214 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:41:51.893225 | orchestrator |
2026-02-02 05:41:51.893236 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-02 05:41:51.893246 | orchestrator | Monday 02 February 2026 05:41:36 +0000 (0:00:01.167) 0:08:04.207 *******
2026-02-02 05:41:51.893257 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:41:51.893268 | orchestrator |
2026-02-02 05:41:51.893279 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-02 05:41:51.893290 | orchestrator | Monday 02 February 2026 05:41:37 +0000 (0:00:01.178) 0:08:05.386 *******
2026-02-02 05:41:51.893307 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:41:51.893318 | orchestrator |
2026-02-02 05:41:51.893329 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-02 05:41:51.893339 | orchestrator | Monday 02 February 2026 05:41:38 +0000 (0:00:01.164) 0:08:06.550 *******
2026-02-02 05:41:51.893350 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:41:51.893361 | orchestrator |
2026-02-02 05:41:51.893371 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-02 05:41:51.893382 | orchestrator | Monday 02 February 2026 05:41:40 +0000 (0:00:01.177) 0:08:07.728 *******
2026-02-02 05:41:51.893393 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:41:51.893404 | orchestrator |
2026-02-02 05:41:51.893414 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-02 05:41:51.893425 | orchestrator | Monday 02 February 2026 05:41:41 +0000 (0:00:01.169) 0:08:08.898 *******
2026-02-02 05:41:51.893436 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:41:51.893464 | orchestrator |
2026-02-02 05:41:51.893475 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-02 05:41:51.893486 | orchestrator | Monday 02 February 2026 05:41:42 +0000 (0:00:01.185) 0:08:10.083 *******
2026-02-02 05:41:51.893496 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:41:51.893507 | orchestrator |
2026-02-02 05:41:51.893518 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-02 05:41:51.893529 | orchestrator | Monday 02 February 2026 05:41:43 +0000 (0:00:01.135) 0:08:11.219 *******
2026-02-02 05:41:51.893539 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-02-02 05:41:51.893550 | orchestrator |
2026-02-02 05:41:51.893561 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-02 05:41:51.893572 | orchestrator | Monday 02 February 2026 05:41:45 +0000 (0:00:01.484) 0:08:12.703 *******
2026-02-02 05:41:51.893583 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-02-02 05:41:51.893594 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-02-02 05:41:51.893605 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-02-02 05:41:51.893616 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-02-02 05:41:51.893627 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-02-02 05:41:51.893637 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-02-02 05:41:51.893648 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-02-02 05:41:51.893659 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-02-02 05:41:51.893669 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-02 05:41:51.893680 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-02 05:41:51.893691 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-02 05:41:51.893702 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-02 05:41:51.893713 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-02 05:41:51.893724 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-02 05:41:51.893741 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-02-02 05:42:39.951951 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-02-02 05:42:39.952080 | orchestrator |
2026-02-02 05:42:39.952091 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-02 05:42:39.952109 | orchestrator | Monday 02 February 2026 05:41:51 +0000 (0:00:06.754) 0:08:19.457 *******
2026-02-02 05:42:39.952116 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:42:39.952131 | orchestrator |
2026-02-02 05:42:39.952138 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-02 05:42:39.952144 | orchestrator | Monday 02 February 2026 05:41:52 +0000 (0:00:01.101) 0:08:20.558 *******
2026-02-02 05:42:39.952151 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:42:39.952174 | orchestrator |
2026-02-02 05:42:39.952181 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-02 05:42:39.952188 | orchestrator | Monday 02 February 2026 05:41:54 +0000 (0:00:01.126) 0:08:21.685 *******
2026-02-02 05:42:39.952194 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:42:39.952200 | orchestrator |
2026-02-02 05:42:39.952207 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-02 05:42:39.952213 | orchestrator | Monday 02 February 2026 05:41:55 +0000 (0:00:01.226) 0:08:22.912 *******
2026-02-02 05:42:39.952219 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:42:39.952225 | orchestrator |
2026-02-02 05:42:39.952232 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-02 05:42:39.952238 | orchestrator | Monday 02 February 2026 05:41:56 +0000 (0:00:01.165) 0:08:24.077 *******
2026-02-02 05:42:39.952244 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:42:39.952250 | orchestrator |
2026-02-02 05:42:39.952256 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-02 05:42:39.952262 | orchestrator | Monday 02 February 2026 05:41:57 +0000 (0:00:01.153) 0:08:25.231 *******
2026-02-02 05:42:39.952268 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:42:39.952274 | orchestrator |
2026-02-02 05:42:39.952281 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-02 05:42:39.952288 | orchestrator | Monday 02 February 2026 05:41:58 +0000 (0:00:01.097) 0:08:26.328 *******
2026-02-02 05:42:39.952306 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:42:39.952312 | orchestrator |
2026-02-02 05:42:39.952318 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-02 05:42:39.952325 | orchestrator | Monday 02 February 2026 05:41:59 +0000 (0:00:01.168) 0:08:27.496 *******
2026-02-02 05:42:39.952331 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:42:39.952337 | orchestrator |
2026-02-02 05:42:39.952343 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-02 05:42:39.952350 | orchestrator | Monday 02 February 2026 05:42:01 +0000 (0:00:01.223) 0:08:28.720 *******
2026-02-02 05:42:39.952356 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:42:39.952362 | orchestrator |
2026-02-02 05:42:39.952368 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-02 05:42:39.952374 | orchestrator | Monday 02 February 2026 05:42:02 +0000 (0:00:01.151) 0:08:29.872 *******
2026-02-02 05:42:39.952380 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:42:39.952387 | orchestrator |
2026-02-02 05:42:39.952393 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-02 05:42:39.952399 | orchestrator | Monday 02 February 2026 05:42:03 +0000 (0:00:01.141) 0:08:31.013 *******
2026-02-02 05:42:39.952405 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:42:39.952412 | orchestrator |
2026-02-02 05:42:39.952519 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-02 05:42:39.952530 | orchestrator | Monday 02 February 2026 05:42:04 +0000 (0:00:01.148) 0:08:32.162 *******
2026-02-02 05:42:39.952537 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:42:39.952544 | orchestrator |
2026-02-02 05:42:39.952552 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-02 05:42:39.952559 | orchestrator | Monday 02 February 2026 05:42:05 +0000 (0:00:01.116) 0:08:33.279 *******
2026-02-02 05:42:39.952566 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:42:39.952574 | orchestrator |
2026-02-02 05:42:39.952581 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-02 05:42:39.952588 | orchestrator | Monday 02 February 2026 05:42:06 +0000 (0:00:01.200) 0:08:34.480 *******
2026-02-02 05:42:39.952595 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:42:39.952602 | orchestrator |
2026-02-02 05:42:39.952609 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-02 05:42:39.952617 | orchestrator | Monday 02 February 2026 05:42:08 +0000 (0:00:01.179) 0:08:35.660 *******
2026-02-02 05:42:39.952630 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:42:39.952638 | orchestrator |
2026-02-02 05:42:39.952645 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-02 05:42:39.952652 | orchestrator | Monday 02 February 2026 05:42:09 +0000 (0:00:01.230) 0:08:36.890 *******
2026-02-02 05:42:39.952659 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:42:39.952666 | orchestrator |
2026-02-02 05:42:39.952673 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-02 05:42:39.952680 | orchestrator | Monday 02 February 2026 05:42:10 +0000 (0:00:01.159) 0:08:38.049 *******
2026-02-02 05:42:39.952687 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:42:39.952694 | orchestrator |
2026-02-02 05:42:39.952702 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-02 05:42:39.952711 | orchestrator | Monday 02 February 2026 05:42:11 +0000 (0:00:01.110) 0:08:39.160 *******
2026-02-02 05:42:39.952718 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:42:39.952725 | orchestrator |
2026-02-02 05:42:39.952733 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-02 05:42:39.952741 | orchestrator | Monday 02 February 2026 05:42:12 +0000 (0:00:01.141) 0:08:40.302 *******
2026-02-02 05:42:39.952748 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:42:39.952755 | orchestrator |
2026-02-02 05:42:39.952777 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-02 05:42:39.952784 | orchestrator | Monday 02 February 2026 05:42:13 +0000 (0:00:01.115) 0:08:41.418 *******
2026-02-02 05:42:39.952792 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:42:39.952799 | orchestrator |
2026-02-02 05:42:39.952806 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-02 05:42:39.952813 | orchestrator | Monday 02 February 2026 05:42:14 +0000 (0:00:01.151) 0:08:42.570 *******
2026-02-02 05:42:39.952819 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:42:39.952825 | orchestrator |
2026-02-02 05:42:39.952831 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-02 05:42:39.952837 | orchestrator | Monday 02 February 2026 05:42:16 +0000 (0:00:01.168) 0:08:43.738 *******
2026-02-02 05:42:39.952844 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-02 05:42:39.952850 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-02 05:42:39.952856 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-02
05:42:39.952862 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:42:39.952868 | orchestrator | 2026-02-02 05:42:39.952874 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-02 05:42:39.952880 | orchestrator | Monday 02 February 2026 05:42:17 +0000 (0:00:01.796) 0:08:45.535 ******* 2026-02-02 05:42:39.952886 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-02 05:42:39.952892 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-02 05:42:39.952899 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-02 05:42:39.952905 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:42:39.952911 | orchestrator | 2026-02-02 05:42:39.952917 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-02 05:42:39.952923 | orchestrator | Monday 02 February 2026 05:42:19 +0000 (0:00:01.452) 0:08:46.987 ******* 2026-02-02 05:42:39.952929 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-02 05:42:39.952935 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-02 05:42:39.952941 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-02 05:42:39.952952 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:42:39.952958 | orchestrator | 2026-02-02 05:42:39.952964 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-02 05:42:39.952971 | orchestrator | Monday 02 February 2026 05:42:20 +0000 (0:00:01.461) 0:08:48.449 ******* 2026-02-02 05:42:39.952982 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:42:39.952988 | orchestrator | 2026-02-02 05:42:39.952994 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-02 05:42:39.953000 | orchestrator | Monday 02 February 2026 05:42:21 +0000 (0:00:01.126) 0:08:49.575 ******* 
2026-02-02 05:42:39.953006 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-02-02 05:42:39.953012 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:42:39.953018 | orchestrator | 2026-02-02 05:42:39.953024 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-02 05:42:39.953030 | orchestrator | Monday 02 February 2026 05:42:23 +0000 (0:00:01.361) 0:08:50.936 ******* 2026-02-02 05:42:39.953037 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:42:39.953043 | orchestrator | 2026-02-02 05:42:39.953049 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-02 05:42:39.953055 | orchestrator | Monday 02 February 2026 05:42:25 +0000 (0:00:01.751) 0:08:52.688 ******* 2026-02-02 05:42:39.953061 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:42:39.953067 | orchestrator | 2026-02-02 05:42:39.953073 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-02 05:42:39.953079 | orchestrator | Monday 02 February 2026 05:42:26 +0000 (0:00:01.153) 0:08:53.841 ******* 2026-02-02 05:42:39.953085 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0 2026-02-02 05:42:39.953092 | orchestrator | 2026-02-02 05:42:39.953098 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-02 05:42:39.953105 | orchestrator | Monday 02 February 2026 05:42:27 +0000 (0:00:01.478) 0:08:55.320 ******* 2026-02-02 05:42:39.953111 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-02-02 05:42:39.953117 | orchestrator | 2026-02-02 05:42:39.953123 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-02 05:42:39.953129 | orchestrator | Monday 02 February 2026 05:42:31 +0000 (0:00:03.509) 0:08:58.829 ******* 2026-02-02 05:42:39.953135 | orchestrator | skipping: 
[testbed-node-0] 2026-02-02 05:42:39.953141 | orchestrator | 2026-02-02 05:42:39.953147 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-02 05:42:39.953153 | orchestrator | Monday 02 February 2026 05:42:32 +0000 (0:00:01.157) 0:08:59.987 ******* 2026-02-02 05:42:39.953159 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:42:39.953165 | orchestrator | 2026-02-02 05:42:39.953171 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-02 05:42:39.953178 | orchestrator | Monday 02 February 2026 05:42:33 +0000 (0:00:01.251) 0:09:01.238 ******* 2026-02-02 05:42:39.953184 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:42:39.953190 | orchestrator | 2026-02-02 05:42:39.953196 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-02 05:42:39.953215 | orchestrator | Monday 02 February 2026 05:42:34 +0000 (0:00:01.163) 0:09:02.402 ******* 2026-02-02 05:42:39.953221 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:42:39.953227 | orchestrator | 2026-02-02 05:42:39.953234 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-02 05:42:39.953240 | orchestrator | Monday 02 February 2026 05:42:36 +0000 (0:00:01.990) 0:09:04.392 ******* 2026-02-02 05:42:39.953246 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:42:39.953252 | orchestrator | 2026-02-02 05:42:39.953258 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-02 05:42:39.953264 | orchestrator | Monday 02 February 2026 05:42:38 +0000 (0:00:01.625) 0:09:06.017 ******* 2026-02-02 05:42:39.953270 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:42:39.953277 | orchestrator | 2026-02-02 05:42:39.953286 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-02 05:43:37.694710 | orchestrator | Monday 02 
February 2026 05:42:39 +0000 (0:00:01.504) 0:09:07.522 ******* 2026-02-02 05:43:37.694849 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:43:37.694877 | orchestrator | 2026-02-02 05:43:37.694899 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-02 05:43:37.694949 | orchestrator | Monday 02 February 2026 05:42:41 +0000 (0:00:01.475) 0:09:08.998 ******* 2026-02-02 05:43:37.694969 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:43:37.694989 | orchestrator | 2026-02-02 05:43:37.695007 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-02 05:43:37.695025 | orchestrator | Monday 02 February 2026 05:42:43 +0000 (0:00:01.725) 0:09:10.724 ******* 2026-02-02 05:43:37.695043 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:43:37.695063 | orchestrator | 2026-02-02 05:43:37.695082 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-02 05:43:37.695124 | orchestrator | Monday 02 February 2026 05:42:44 +0000 (0:00:01.718) 0:09:12.442 ******* 2026-02-02 05:43:37.695143 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-02 05:43:37.695162 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-02 05:43:37.695180 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-02 05:43:37.695198 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-02-02 05:43:37.695216 | orchestrator | 2026-02-02 05:43:37.695235 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-02 05:43:37.695254 | orchestrator | Monday 02 February 2026 05:42:48 +0000 (0:00:03.995) 0:09:16.438 ******* 2026-02-02 05:43:37.695270 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:43:37.695283 | orchestrator | 2026-02-02 05:43:37.695296 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] 
************************** 2026-02-02 05:43:37.695309 | orchestrator | Monday 02 February 2026 05:42:50 +0000 (0:00:02.046) 0:09:18.484 ******* 2026-02-02 05:43:37.695321 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:43:37.695333 | orchestrator | 2026-02-02 05:43:37.695345 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-02 05:43:37.695372 | orchestrator | Monday 02 February 2026 05:42:52 +0000 (0:00:01.123) 0:09:19.608 ******* 2026-02-02 05:43:37.695385 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:43:37.695397 | orchestrator | 2026-02-02 05:43:37.695410 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-02 05:43:37.695422 | orchestrator | Monday 02 February 2026 05:42:53 +0000 (0:00:01.295) 0:09:20.903 ******* 2026-02-02 05:43:37.695435 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:43:37.695448 | orchestrator | 2026-02-02 05:43:37.695460 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-02 05:43:37.695472 | orchestrator | Monday 02 February 2026 05:42:55 +0000 (0:00:02.361) 0:09:23.264 ******* 2026-02-02 05:43:37.695508 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:43:37.695521 | orchestrator | 2026-02-02 05:43:37.695534 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-02 05:43:37.695547 | orchestrator | Monday 02 February 2026 05:42:57 +0000 (0:00:01.561) 0:09:24.825 ******* 2026-02-02 05:43:37.695559 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:43:37.695571 | orchestrator | 2026-02-02 05:43:37.695582 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-02 05:43:37.695593 | orchestrator | Monday 02 February 2026 05:42:58 +0000 (0:00:01.217) 0:09:26.044 ******* 2026-02-02 05:43:37.695603 | orchestrator | included: 
/ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-02-02 05:43:37.695614 | orchestrator | 2026-02-02 05:43:37.695625 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-02 05:43:37.695636 | orchestrator | Monday 02 February 2026 05:42:59 +0000 (0:00:01.484) 0:09:27.528 ******* 2026-02-02 05:43:37.695646 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:43:37.695657 | orchestrator | 2026-02-02 05:43:37.695667 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-02 05:43:37.695678 | orchestrator | Monday 02 February 2026 05:43:01 +0000 (0:00:01.140) 0:09:28.669 ******* 2026-02-02 05:43:37.695689 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:43:37.695700 | orchestrator | 2026-02-02 05:43:37.695720 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-02 05:43:37.695731 | orchestrator | Monday 02 February 2026 05:43:02 +0000 (0:00:01.095) 0:09:29.765 ******* 2026-02-02 05:43:37.695742 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-02-02 05:43:37.695752 | orchestrator | 2026-02-02 05:43:37.695763 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-02 05:43:37.695773 | orchestrator | Monday 02 February 2026 05:43:03 +0000 (0:00:01.481) 0:09:31.247 ******* 2026-02-02 05:43:37.695784 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:43:37.695795 | orchestrator | 2026-02-02 05:43:37.695805 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-02 05:43:37.695816 | orchestrator | Monday 02 February 2026 05:43:05 +0000 (0:00:02.329) 0:09:33.577 ******* 2026-02-02 05:43:37.695826 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:43:37.695837 | orchestrator | 2026-02-02 05:43:37.695847 | orchestrator | TASK [ceph-mon : Enable 
ceph-mon.target] *************************************** 2026-02-02 05:43:37.695858 | orchestrator | Monday 02 February 2026 05:43:07 +0000 (0:00:01.925) 0:09:35.503 ******* 2026-02-02 05:43:37.695868 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:43:37.695879 | orchestrator | 2026-02-02 05:43:37.695889 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-02 05:43:37.695900 | orchestrator | Monday 02 February 2026 05:43:10 +0000 (0:00:02.464) 0:09:37.967 ******* 2026-02-02 05:43:37.695910 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:43:37.695921 | orchestrator | 2026-02-02 05:43:37.695932 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-02 05:43:37.695942 | orchestrator | Monday 02 February 2026 05:43:13 +0000 (0:00:03.217) 0:09:41.185 ******* 2026-02-02 05:43:37.695953 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-02-02 05:43:37.695963 | orchestrator | 2026-02-02 05:43:37.695993 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-02-02 05:43:37.696004 | orchestrator | Monday 02 February 2026 05:43:15 +0000 (0:00:01.629) 0:09:42.814 ******* 2026-02-02 05:43:37.696014 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:43:37.696025 | orchestrator | 2026-02-02 05:43:37.696036 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-02 05:43:37.696046 | orchestrator | Monday 02 February 2026 05:43:17 +0000 (0:00:02.217) 0:09:45.032 ******* 2026-02-02 05:43:37.696057 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:43:37.696067 | orchestrator | 2026-02-02 05:43:37.696078 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-02 05:43:37.696088 | orchestrator | Monday 02 February 2026 05:43:20 +0000 (0:00:03.051) 0:09:48.084 ******* 2026-02-02 05:43:37.696099 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:43:37.696109 | orchestrator | 2026-02-02 05:43:37.696120 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-02 05:43:37.696130 | orchestrator | Monday 02 February 2026 05:43:21 +0000 (0:00:01.114) 0:09:49.199 ******* 2026-02-02 05:43:37.696143 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d80dfd80fe861993a62a686f3742b6b1f75a206'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-02 05:43:37.696158 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d80dfd80fe861993a62a686f3742b6b1f75a206'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2026-02-02 05:43:37.696174 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d80dfd80fe861993a62a686f3742b6b1f75a206'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-02 05:43:37.696193 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d80dfd80fe861993a62a686f3742b6b1f75a206'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-02 05:43:37.696206 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d80dfd80fe861993a62a686f3742b6b1f75a206'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-02 05:43:37.696217 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d80dfd80fe861993a62a686f3742b6b1f75a206'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__2d80dfd80fe861993a62a686f3742b6b1f75a206'}])  2026-02-02 05:43:37.696257 | orchestrator | 2026-02-02 05:43:37.696268 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-02 05:43:37.696279 | orchestrator | Monday 02 February 2026 05:43:31 +0000 (0:00:09.910) 0:09:59.110 ******* 
2026-02-02 05:43:37.696290 | orchestrator | changed: [testbed-node-0] 2026-02-02 05:43:37.696300 | orchestrator | 2026-02-02 05:43:37.696311 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-02 05:43:37.696322 | orchestrator | Monday 02 February 2026 05:43:34 +0000 (0:00:02.537) 0:10:01.647 ******* 2026-02-02 05:43:37.696332 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-02 05:43:37.696343 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-02 05:43:37.696354 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-02 05:43:37.696365 | orchestrator | 2026-02-02 05:43:37.696376 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-02 05:43:37.696386 | orchestrator | Monday 02 February 2026 05:43:36 +0000 (0:00:02.249) 0:10:03.896 ******* 2026-02-02 05:43:37.696397 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-02 05:43:37.696408 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-02 05:43:37.696418 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-02 05:43:37.696429 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:43:37.696439 | orchestrator | 2026-02-02 05:43:37.696450 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-02 05:43:37.696468 | orchestrator | Monday 02 February 2026 05:43:37 +0000 (0:00:01.363) 0:10:05.260 ******* 2026-02-02 05:44:15.335155 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:44:15.335272 | orchestrator | 2026-02-02 05:44:15.335284 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-02-02 05:44:15.335292 | orchestrator | Monday 02 February 2026 05:43:38 +0000 (0:00:01.126) 0:10:06.387 ******* 2026-02-02 05:44:15.335299 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:44:15.335307 | orchestrator | 2026-02-02 05:44:15.335313 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-02 05:44:15.335319 | orchestrator | Monday 02 February 2026 05:43:41 +0000 (0:00:02.255) 0:10:08.642 ******* 2026-02-02 05:44:15.335325 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:44:15.335332 | orchestrator | 2026-02-02 05:44:15.335338 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-02 05:44:15.335345 | orchestrator | Monday 02 February 2026 05:43:42 +0000 (0:00:01.190) 0:10:09.833 ******* 2026-02-02 05:44:15.335371 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:44:15.335377 | orchestrator | 2026-02-02 05:44:15.335384 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-02 05:44:15.335390 | orchestrator | Monday 02 February 2026 05:43:43 +0000 (0:00:01.108) 0:10:10.941 ******* 2026-02-02 05:44:15.335396 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:44:15.335403 | orchestrator | 2026-02-02 05:44:15.335410 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-02 05:44:15.335416 | orchestrator | Monday 02 February 2026 05:43:44 +0000 (0:00:01.149) 0:10:12.091 ******* 2026-02-02 05:44:15.335422 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:44:15.335428 | orchestrator | 2026-02-02 05:44:15.335435 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-02 05:44:15.335441 | orchestrator | Monday 02 February 2026 05:43:45 +0000 (0:00:01.269) 0:10:13.361 ******* 2026-02-02 05:44:15.335447 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:44:15.335454 | 
orchestrator | 2026-02-02 05:44:15.335460 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-02-02 05:44:15.335466 | orchestrator | Monday 02 February 2026 05:43:46 +0000 (0:00:01.125) 0:10:14.486 ******* 2026-02-02 05:44:15.335473 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:44:15.335479 | orchestrator | 2026-02-02 05:44:15.335550 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-02 05:44:15.335558 | orchestrator | Monday 02 February 2026 05:43:48 +0000 (0:00:01.149) 0:10:15.635 ******* 2026-02-02 05:44:15.335564 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:44:15.335570 | orchestrator | 2026-02-02 05:44:15.335576 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-02-02 05:44:15.335581 | orchestrator | 2026-02-02 05:44:15.335587 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-02-02 05:44:15.335593 | orchestrator | Monday 02 February 2026 05:43:49 +0000 (0:00:00.980) 0:10:16.616 ******* 2026-02-02 05:44:15.335599 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:44:15.335604 | orchestrator | 2026-02-02 05:44:15.335610 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-02-02 05:44:15.335616 | orchestrator | Monday 02 February 2026 05:43:50 +0000 (0:00:01.125) 0:10:17.742 ******* 2026-02-02 05:44:15.335621 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:44:15.335627 | orchestrator | 2026-02-02 05:44:15.335633 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-02-02 05:44:15.335638 | orchestrator | Monday 02 February 2026 05:43:50 +0000 (0:00:00.772) 0:10:18.515 ******* 2026-02-02 05:44:15.335644 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:44:15.335651 | orchestrator | 2026-02-02 05:44:15.335658 | orchestrator 
| TASK [Select a running monitor] ************************************************ 2026-02-02 05:44:15.335665 | orchestrator | Monday 02 February 2026 05:43:51 +0000 (0:00:00.782) 0:10:19.298 ******* 2026-02-02 05:44:15.335671 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:44:15.335677 | orchestrator | 2026-02-02 05:44:15.335684 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-02 05:44:15.335691 | orchestrator | Monday 02 February 2026 05:43:52 +0000 (0:00:00.769) 0:10:20.068 ******* 2026-02-02 05:44:15.335698 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-02-02 05:44:15.335705 | orchestrator | 2026-02-02 05:44:15.335712 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-02 05:44:15.335718 | orchestrator | Monday 02 February 2026 05:43:53 +0000 (0:00:01.269) 0:10:21.337 ******* 2026-02-02 05:44:15.335725 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:44:15.335732 | orchestrator | 2026-02-02 05:44:15.335739 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-02 05:44:15.335746 | orchestrator | Monday 02 February 2026 05:43:55 +0000 (0:00:01.519) 0:10:22.857 ******* 2026-02-02 05:44:15.335753 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:44:15.335760 | orchestrator | 2026-02-02 05:44:15.335776 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-02 05:44:15.335783 | orchestrator | Monday 02 February 2026 05:43:56 +0000 (0:00:01.193) 0:10:24.050 ******* 2026-02-02 05:44:15.335789 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:44:15.335796 | orchestrator | 2026-02-02 05:44:15.335804 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-02 05:44:15.335811 | orchestrator | Monday 02 February 2026 05:43:57 +0000 (0:00:01.474) 0:10:25.524 
******* 2026-02-02 05:44:15.335819 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:44:15.335826 | orchestrator | 2026-02-02 05:44:15.335832 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-02 05:44:15.335839 | orchestrator | Monday 02 February 2026 05:43:59 +0000 (0:00:01.147) 0:10:26.672 ******* 2026-02-02 05:44:15.335846 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:44:15.335853 | orchestrator | 2026-02-02 05:44:15.335860 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-02 05:44:15.335868 | orchestrator | Monday 02 February 2026 05:44:00 +0000 (0:00:01.168) 0:10:27.840 ******* 2026-02-02 05:44:15.335874 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:44:15.335881 | orchestrator | 2026-02-02 05:44:15.335888 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-02 05:44:15.335896 | orchestrator | Monday 02 February 2026 05:44:01 +0000 (0:00:01.169) 0:10:29.010 ******* 2026-02-02 05:44:15.335919 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:44:15.335926 | orchestrator | 2026-02-02 05:44:15.335933 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-02 05:44:15.335940 | orchestrator | Monday 02 February 2026 05:44:02 +0000 (0:00:01.174) 0:10:30.184 ******* 2026-02-02 05:44:15.335946 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:44:15.335953 | orchestrator | 2026-02-02 05:44:15.335960 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-02 05:44:15.335967 | orchestrator | Monday 02 February 2026 05:44:03 +0000 (0:00:01.127) 0:10:31.312 ******* 2026-02-02 05:44:15.335974 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 05:44:15.335980 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-02 
05:44:15.335987 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 05:44:15.335994 | orchestrator | 2026-02-02 05:44:15.336001 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-02 05:44:15.336009 | orchestrator | Monday 02 February 2026 05:44:05 +0000 (0:00:02.055) 0:10:33.367 ******* 2026-02-02 05:44:15.336016 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:44:15.336022 | orchestrator | 2026-02-02 05:44:15.336029 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-02 05:44:15.336036 | orchestrator | Monday 02 February 2026 05:44:07 +0000 (0:00:01.225) 0:10:34.593 ******* 2026-02-02 05:44:15.336042 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 05:44:15.336049 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-02 05:44:15.336056 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 05:44:15.336063 | orchestrator | 2026-02-02 05:44:15.336070 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-02 05:44:15.336076 | orchestrator | Monday 02 February 2026 05:44:10 +0000 (0:00:03.325) 0:10:37.919 ******* 2026-02-02 05:44:15.336083 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-02 05:44:15.336093 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-02 05:44:15.336099 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-02 05:44:15.336105 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:44:15.336112 | orchestrator | 2026-02-02 05:44:15.336118 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-02 05:44:15.336125 | orchestrator | Monday 02 February 2026 05:44:12 +0000 (0:00:01.766) 
0:10:39.685 *******
2026-02-02 05:44:15.336137 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-02 05:44:15.336147 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-02 05:44:15.336153 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-02 05:44:15.336161 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:44:15.336168 | orchestrator |
2026-02-02 05:44:15.336174 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-02 05:44:15.336181 | orchestrator | Monday 02 February 2026 05:44:14 +0000 (0:00:01.995) 0:10:41.681 *******
2026-02-02 05:44:15.336190 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-02 05:44:15.336200 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-02 05:44:15.336207 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-02 05:44:15.336213 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:44:15.336219 | orchestrator |
2026-02-02 05:44:15.336229 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-02 05:44:34.898902 | orchestrator | Monday 02 February 2026 05:44:15 +0000 (0:00:01.223) 0:10:42.905 *******
2026-02-02 05:44:34.899049 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '01c921aa07f8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-02 05:44:07.570691', 'end': '2026-02-02 05:44:07.627612', 'delta': '0:00:00.056921', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['01c921aa07f8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-02 05:44:34.899103 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'a42e682d4965', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-02 05:44:08.493129', 'end': '2026-02-02 05:44:08.558870', 'delta': '0:00:00.065741', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a42e682d4965'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-02 05:44:34.899154 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '39d29fabc2d2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-02 05:44:09.083972', 'end': '2026-02-02 05:44:09.131550', 'delta': '0:00:00.047578', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['39d29fabc2d2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-02 05:44:34.899168 | orchestrator |
2026-02-02 05:44:34.899181 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-02 05:44:34.899192 | orchestrator | Monday 02 February 2026 05:44:16 +0000 (0:00:01.212) 0:10:44.117 *******
2026-02-02 05:44:34.899203 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:44:34.899215 | orchestrator |
2026-02-02 05:44:34.899226 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-02 05:44:34.899237 | orchestrator | Monday 02 February 2026 05:44:17 +0000 (0:00:01.273) 0:10:45.391 *******
2026-02-02 05:44:34.899248 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:44:34.899260 | orchestrator |
2026-02-02 05:44:34.899271 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-02 05:44:34.899282 | orchestrator | Monday 02 February 2026 05:44:19 +0000 (0:00:01.281) 0:10:46.673 *******
2026-02-02 05:44:34.899293 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:44:34.899303 | orchestrator |
2026-02-02 05:44:34.899314 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-02 05:44:34.899325 | orchestrator | Monday 02 February 2026 05:44:20 +0000 (0:00:01.136) 0:10:47.810 *******
2026-02-02 05:44:34.899335 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-02-02 05:44:34.899346 | orchestrator |
2026-02-02 05:44:34.899357 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-02 05:44:34.899367 | orchestrator | Monday 02 February 2026 05:44:22 +0000 (0:00:01.948) 0:10:49.758 *******
2026-02-02 05:44:34.899378 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:44:34.899388 | orchestrator |
2026-02-02 05:44:34.899399 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-02 05:44:34.899410 | orchestrator | Monday 02 February 2026 05:44:23 +0000 (0:00:01.144) 0:10:50.903 *******
2026-02-02 05:44:34.899420 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:44:34.899432 | orchestrator |
2026-02-02 05:44:34.899444 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-02 05:44:34.899457 | orchestrator | Monday 02 February 2026 05:44:24 +0000 (0:00:01.110) 0:10:52.014 *******
2026-02-02 05:44:34.899469 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:44:34.899481 | orchestrator |
2026-02-02 05:44:34.899494 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-02 05:44:34.899534 | orchestrator | Monday 02 February 2026 05:44:25 +0000 (0:00:01.236) 0:10:53.251 *******
2026-02-02 05:44:34.899548 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:44:34.899560 | orchestrator |
2026-02-02 05:44:34.899572 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-02 05:44:34.899604 | orchestrator | Monday 02 February 2026 05:44:26 +0000 (0:00:01.137) 0:10:54.388 *******
2026-02-02 05:44:34.899617 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:44:34.899629 | orchestrator |
2026-02-02 05:44:34.899642 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-02 05:44:34.899665 | orchestrator | Monday 02 February 2026 05:44:27 +0000 (0:00:01.100) 0:10:55.488 *******
2026-02-02 05:44:34.899678 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:44:34.899690 | orchestrator |
2026-02-02 05:44:34.899703 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-02 05:44:34.899717 | orchestrator | Monday 02 February 2026 05:44:29 +0000 (0:00:01.143) 0:10:56.632 *******
2026-02-02 05:44:34.899729 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:44:34.899742 | orchestrator |
2026-02-02 05:44:34.899754 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-02 05:44:34.899766 | orchestrator | Monday 02 February 2026 05:44:30 +0000 (0:00:01.193) 0:10:57.825 *******
2026-02-02 05:44:34.899778 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:44:34.899791 | orchestrator |
2026-02-02 05:44:34.899804 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-02 05:44:34.899816 | orchestrator | Monday 02 February 2026 05:44:31 +0000 (0:00:01.156) 0:10:58.982 *******
2026-02-02 05:44:34.899827 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:44:34.899838 | orchestrator |
2026-02-02 05:44:34.899848 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-02 05:44:34.899860 | orchestrator | Monday 02 February 2026 05:44:32 +0000 (0:00:01.117) 0:11:00.099 *******
2026-02-02 05:44:34.899870 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:44:34.899881 | orchestrator |
2026-02-02 05:44:34.899892 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-02 05:44:34.899903 | orchestrator | Monday 02 February 2026 05:44:33 +0000 (0:00:01.115) 0:11:01.215 *******
2026-02-02 05:44:34.899921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-02 05:44:34.899935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-02 05:44:34.899946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-02 05:44:34.899958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-27-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-02 05:44:34.899971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-02 05:44:34.899989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-02 05:44:34.900008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-02 05:44:36.112187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2343887', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part16', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part14', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part15', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part1', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-02 05:44:36.112284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-02 05:44:36.112300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-02 05:44:36.112311 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:44:36.112342 | orchestrator |
2026-02-02 05:44:36.112353 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-02 05:44:36.112364 | orchestrator | Monday 02 February 2026 05:44:34 +0000 (0:00:01.245) 0:11:02.460 *******
2026-02-02 05:44:36.112376 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 05:44:36.112405 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 05:44:36.112416 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 05:44:36.112433 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-27-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 05:44:36.112446 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 05:44:36.112464 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 05:44:36.112552 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 05:44:36.112601 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2343887', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part16', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part14', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part15', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part1', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 05:45:07.081839 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 05:45:07.081949 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 05:45:07.081984 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:45:07.081994 | orchestrator |
2026-02-02 05:45:07.082003 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-02 05:45:07.082011 | orchestrator | Monday 02 February 2026 05:44:36 +0000 (0:00:01.227) 0:11:03.688 *******
2026-02-02 05:45:07.082069 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:45:07.082077 | orchestrator |
2026-02-02 05:45:07.082085 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-02 05:45:07.082092 | orchestrator | Monday 02 February 2026 05:44:37 +0000 (0:00:01.526) 0:11:05.214 *******
2026-02-02 05:45:07.082098 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:45:07.082105 | orchestrator |
2026-02-02 05:45:07.082112 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-02 05:45:07.082119 | orchestrator | Monday 02 February 2026 05:44:38 +0000 (0:00:01.137) 0:11:06.351 *******
2026-02-02 05:45:07.082125 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:45:07.082133 | orchestrator |
2026-02-02 05:45:07.082139 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-02 05:45:07.082146 | orchestrator | Monday 02 February 2026 05:44:40 +0000 (0:00:01.503) 0:11:07.855 *******
2026-02-02 05:45:07.082153 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:45:07.082159 | orchestrator |
2026-02-02 05:45:07.082165 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-02 05:45:07.082172 | orchestrator | Monday 02 February 2026 05:44:41 +0000 (0:00:01.137) 0:11:08.993 *******
2026-02-02 05:45:07.082179 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:45:07.082185 | orchestrator |
2026-02-02 05:45:07.082192 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-02 05:45:07.082199 | orchestrator | Monday 02 February 2026 05:44:42 +0000 (0:00:01.214) 0:11:10.207 *******
2026-02-02 05:45:07.082205 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:45:07.082213 | orchestrator |
2026-02-02 05:45:07.082219 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-02 05:45:07.082225 | orchestrator | Monday 02 February 2026 05:44:43 +0000 (0:00:01.119) 0:11:11.327 *******
2026-02-02 05:45:07.082232 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-02 05:45:07.082239 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-02 05:45:07.082246 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-02 05:45:07.082253 | orchestrator |
2026-02-02 05:45:07.082260 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-02 05:45:07.082266 | orchestrator | Monday 02 February 2026 05:44:45 +0000 (0:00:02.040) 0:11:13.368 *******
2026-02-02 05:45:07.082272 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-02 05:45:07.082279 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-02 05:45:07.082286 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-02 05:45:07.082292 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:45:07.082298 | orchestrator |
2026-02-02 05:45:07.082305 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-02 05:45:07.082312 | orchestrator | Monday 02 February 2026 05:44:47 +0000 (0:00:01.248) 0:11:14.616 *******
2026-02-02 05:45:07.082319 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:45:07.082325 | orchestrator |
2026-02-02 05:45:07.082344 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-02 05:45:07.082350 | orchestrator | Monday 02 February 2026 05:44:48 +0000 (0:00:01.142) 0:11:15.759 *******
2026-02-02 05:45:07.082357 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-02 05:45:07.082374 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-02 05:45:07.082381 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 05:45:07.082387 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-02 05:45:07.082394 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-02 05:45:07.082400 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-02 05:45:07.082425 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-02 05:45:07.082432 | orchestrator |
2026-02-02 05:45:07.082438 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-02 05:45:07.082444 | orchestrator | Monday 02 February 2026 05:44:49 +0000 (0:00:01.820) 0:11:17.579 *******
2026-02-02 05:45:07.082450 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-02 05:45:07.082456 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-02 05:45:07.082463 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 05:45:07.082469 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-02 05:45:07.082475 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-02 05:45:07.082481 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-02 05:45:07.082488 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-02 05:45:07.082493 | orchestrator |
2026-02-02 05:45:07.082500 | orchestrator | TASK [Get ceph cluster status] *************************************************
2026-02-02 05:45:07.082536 | orchestrator | Monday 02 February 2026 05:44:52 +0000 (0:00:02.328) 0:11:19.908 *******
2026-02-02 05:45:07.082543 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:45:07.082550 | orchestrator |
2026-02-02 05:45:07.082556 | orchestrator | TASK [Display ceph health detail] **********************************************
2026-02-02 05:45:07.082562 | orchestrator | Monday 02 February 2026 05:44:53 +0000 (0:00:00.871) 0:11:20.780 *******
2026-02-02 05:45:07.082568 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:45:07.082574 | orchestrator |
2026-02-02 05:45:07.082581 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] ****************************
2026-02-02 05:45:07.082587 | orchestrator | Monday 02 February 2026 05:44:54 +0000 (0:00:00.879) 0:11:21.659 *******
2026-02-02 05:45:07.082593 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:45:07.082599 | orchestrator |
2026-02-02 05:45:07.082606 | orchestrator | TASK [Get the ceph quorum status] **********************************************
2026-02-02 05:45:07.082613 | orchestrator | Monday 02 February 2026 05:44:54 +0000 (0:00:00.752) 0:11:22.412 *******
2026-02-02 05:45:07.082620 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:45:07.082627 | orchestrator |
2026-02-02 05:45:07.082633 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] *****************
2026-02-02 05:45:07.082640 | orchestrator | Monday 02 February 2026 05:44:55 +0000 (0:00:00.907) 0:11:23.319 *******
2026-02-02 05:45:07.082646 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:45:07.082653 | orchestrator |
2026-02-02 05:45:07.082659 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ********************
2026-02-02 05:45:07.082666 | orchestrator | Monday 02 February 2026 05:44:56 +0000 (0:00:00.776) 0:11:24.096 *******
2026-02-02 05:45:07.082673 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-02 05:45:07.082680 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-02 05:45:07.082686 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-02 05:45:07.082694 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:45:07.082698 | orchestrator |
2026-02-02 05:45:07.082702 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-02-02 05:45:07.082715 | orchestrator | Monday 02 February 2026 05:44:57 +0000 (0:00:01.050) 0:11:25.146 *******
2026-02-02 05:45:07.082719 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-02-02 05:45:07.082723 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-02-02 05:45:07.082729 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-02-02 05:45:07.082736 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-02-02 05:45:07.082743 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-02-02 05:45:07.082749 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-02-02 05:45:07.082753 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:45:07.082757 | orchestrator |
2026-02-02 05:45:07.082761 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-02-02 05:45:07.082765 | orchestrator | Monday 02 February 2026 05:44:59 +0000 (0:00:01.660) 0:11:26.806 *******
2026-02-02 05:45:07.082769 | orchestrator | changed: [testbed-node-1] => (item=testbed-node-1)
2026-02-02 05:45:07.082773 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-02 05:45:07.082778 | orchestrator |
2026-02-02 05:45:07.082782 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-02-02 05:45:07.082787 | orchestrator | Monday 02 February 2026 05:45:02 +0000 (0:00:03.251) 0:11:30.058 *******
2026-02-02 05:45:07.082794 | orchestrator | changed: [testbed-node-1]
2026-02-02 05:45:07.082800 | orchestrator |
2026-02-02 05:45:07.082813 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-02 05:45:07.082820 | orchestrator | Monday 02 February 2026 05:45:04 +0000 (0:00:02.172) 0:11:32.230 *******
2026-02-02 05:45:07.082826 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1
2026-02-02 05:45:07.082834 | orchestrator |
2026-02-02 05:45:07.082841 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-02 05:45:07.082847 | orchestrator | Monday 02 February 2026 05:45:05 +0000 (0:00:01.292) 0:11:33.523 *******
2026-02-02 05:45:07.082853 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1
2026-02-02 05:45:07.082859 | orchestrator |
2026-02-02 05:45:07.082865 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-02 05:45:07.082880 | orchestrator | Monday 02 February 2026 05:45:07 +0000 (0:00:01.125) 0:11:34.649 *******
2026-02-02 05:45:49.775045 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:45:49.775227 | orchestrator |
2026-02-02 05:45:49.775249 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-02 05:45:49.775263 | orchestrator | Monday 02 February 2026 05:45:08 +0000 (0:00:01.534) 0:11:36.184 *******
2026-02-02 05:45:49.775274 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:45:49.775299 | orchestrator |
2026-02-02 05:45:49.775311 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-02 05:45:49.775322 | orchestrator | Monday 02 February 2026 05:45:09 +0000 (0:00:01.113) 0:11:37.297 *******
2026-02-02 05:45:49.775333 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:45:49.775344 | orchestrator |
2026-02-02 05:45:49.775355 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-02 05:45:49.775366 | orchestrator | Monday 02 February 2026 05:45:10 +0000 (0:00:01.173) 0:11:38.471 *******
2026-02-02 05:45:49.775376 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:45:49.775387 | orchestrator |
2026-02-02 05:45:49.775398 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-02 05:45:49.775409 | orchestrator | Monday 02 February 2026 05:45:12 +0000 (0:00:01.184) 0:11:39.655 *******
2026-02-02 05:45:49.775420 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:45:49.775431 | orchestrator |
2026-02-02 05:45:49.775442 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-02 05:45:49.775476 | orchestrator | Monday 02 February 2026 05:45:13 +0000 (0:00:01.582) 0:11:41.237 *******
2026-02-02 05:45:49.775490 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:45:49.775502 | orchestrator |
2026-02-02 05:45:49.775515 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-02 05:45:49.775557 | orchestrator | Monday 02 February 2026 05:45:14 +0000 (0:00:01.163) 0:11:42.401 *******
2026-02-02 05:45:49.775569 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:45:49.775582 | orchestrator |
2026-02-02 05:45:49.775594 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-02 05:45:49.775607 | orchestrator | Monday 02 February 2026 05:45:15 +0000 (0:00:01.172) 0:11:43.574 *******
2026-02-02 05:45:49.775619 | orchestrator | ok: [testbed-node-1]
2026-02-02 05:45:49.775631 | orchestrator | 2026-02-02 05:45:49.775644 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-02 05:45:49.775657 | orchestrator | Monday 02 February 2026 05:45:17 +0000 (0:00:01.557) 0:11:45.131 ******* 2026-02-02 05:45:49.775669 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:45:49.775682 | orchestrator | 2026-02-02 05:45:49.775694 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-02 05:45:49.775707 | orchestrator | Monday 02 February 2026 05:45:19 +0000 (0:00:01.573) 0:11:46.705 ******* 2026-02-02 05:45:49.775719 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:45:49.775731 | orchestrator | 2026-02-02 05:45:49.775743 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-02 05:45:49.775756 | orchestrator | Monday 02 February 2026 05:45:19 +0000 (0:00:00.785) 0:11:47.491 ******* 2026-02-02 05:45:49.775767 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:45:49.775777 | orchestrator | 2026-02-02 05:45:49.775788 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-02 05:45:49.775799 | orchestrator | Monday 02 February 2026 05:45:20 +0000 (0:00:00.808) 0:11:48.300 ******* 2026-02-02 05:45:49.775810 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:45:49.775820 | orchestrator | 2026-02-02 05:45:49.775831 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-02 05:45:49.775842 | orchestrator | Monday 02 February 2026 05:45:21 +0000 (0:00:00.777) 0:11:49.077 ******* 2026-02-02 05:45:49.775853 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:45:49.775863 | orchestrator | 2026-02-02 05:45:49.775874 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-02 05:45:49.775885 | orchestrator | Monday 02 
February 2026 05:45:22 +0000 (0:00:00.800) 0:11:49.878 ******* 2026-02-02 05:45:49.775896 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:45:49.775906 | orchestrator | 2026-02-02 05:45:49.775917 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-02 05:45:49.775928 | orchestrator | Monday 02 February 2026 05:45:23 +0000 (0:00:00.765) 0:11:50.644 ******* 2026-02-02 05:45:49.775939 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:45:49.775949 | orchestrator | 2026-02-02 05:45:49.775960 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-02 05:45:49.775970 | orchestrator | Monday 02 February 2026 05:45:23 +0000 (0:00:00.749) 0:11:51.393 ******* 2026-02-02 05:45:49.775981 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:45:49.775992 | orchestrator | 2026-02-02 05:45:49.776003 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-02 05:45:49.776013 | orchestrator | Monday 02 February 2026 05:45:24 +0000 (0:00:00.788) 0:11:52.182 ******* 2026-02-02 05:45:49.776024 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:45:49.776035 | orchestrator | 2026-02-02 05:45:49.776045 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-02 05:45:49.776056 | orchestrator | Monday 02 February 2026 05:45:25 +0000 (0:00:00.800) 0:11:52.982 ******* 2026-02-02 05:45:49.776067 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:45:49.776077 | orchestrator | 2026-02-02 05:45:49.776088 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-02 05:45:49.776108 | orchestrator | Monday 02 February 2026 05:45:26 +0000 (0:00:00.831) 0:11:53.813 ******* 2026-02-02 05:45:49.776119 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:45:49.776130 | orchestrator | 2026-02-02 05:45:49.776141 | orchestrator | TASK 
[ceph-common : Include configure_repository.yml] ************************** 2026-02-02 05:45:49.776152 | orchestrator | Monday 02 February 2026 05:45:27 +0000 (0:00:00.781) 0:11:54.595 ******* 2026-02-02 05:45:49.776162 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:45:49.776173 | orchestrator | 2026-02-02 05:45:49.776184 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-02 05:45:49.776194 | orchestrator | Monday 02 February 2026 05:45:27 +0000 (0:00:00.746) 0:11:55.342 ******* 2026-02-02 05:45:49.776205 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:45:49.776216 | orchestrator | 2026-02-02 05:45:49.776227 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-02 05:45:49.776253 | orchestrator | Monday 02 February 2026 05:45:28 +0000 (0:00:00.764) 0:11:56.106 ******* 2026-02-02 05:45:49.776264 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:45:49.776275 | orchestrator | 2026-02-02 05:45:49.776333 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-02 05:45:49.776346 | orchestrator | Monday 02 February 2026 05:45:29 +0000 (0:00:00.843) 0:11:56.950 ******* 2026-02-02 05:45:49.776358 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:45:49.776369 | orchestrator | 2026-02-02 05:45:49.776380 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-02 05:45:49.776390 | orchestrator | Monday 02 February 2026 05:45:30 +0000 (0:00:00.740) 0:11:57.691 ******* 2026-02-02 05:45:49.776401 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:45:49.776412 | orchestrator | 2026-02-02 05:45:49.776423 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-02 05:45:49.776434 | orchestrator | Monday 02 February 2026 05:45:30 +0000 (0:00:00.755) 0:11:58.447 ******* 2026-02-02 
05:45:49.776445 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:45:49.776456 | orchestrator | 2026-02-02 05:45:49.776467 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-02 05:45:49.776478 | orchestrator | Monday 02 February 2026 05:45:31 +0000 (0:00:00.826) 0:11:59.273 ******* 2026-02-02 05:45:49.776488 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:45:49.776499 | orchestrator | 2026-02-02 05:45:49.776510 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-02 05:45:49.776538 | orchestrator | Monday 02 February 2026 05:45:32 +0000 (0:00:00.757) 0:12:00.030 ******* 2026-02-02 05:45:49.776550 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:45:49.776561 | orchestrator | 2026-02-02 05:45:49.776572 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-02 05:45:49.776582 | orchestrator | Monday 02 February 2026 05:45:33 +0000 (0:00:00.780) 0:12:00.811 ******* 2026-02-02 05:45:49.776593 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:45:49.776604 | orchestrator | 2026-02-02 05:45:49.776615 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-02 05:45:49.776625 | orchestrator | Monday 02 February 2026 05:45:34 +0000 (0:00:00.787) 0:12:01.598 ******* 2026-02-02 05:45:49.776636 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:45:49.776647 | orchestrator | 2026-02-02 05:45:49.776657 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-02 05:45:49.776668 | orchestrator | Monday 02 February 2026 05:45:34 +0000 (0:00:00.753) 0:12:02.352 ******* 2026-02-02 05:45:49.776679 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:45:49.776690 | orchestrator | 2026-02-02 05:45:49.776701 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-02-02 05:45:49.776711 | orchestrator | Monday 02 February 2026 05:45:35 +0000 (0:00:00.752) 0:12:03.104 ******* 2026-02-02 05:45:49.776722 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:45:49.776733 | orchestrator | 2026-02-02 05:45:49.776744 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-02 05:45:49.776762 | orchestrator | Monday 02 February 2026 05:45:36 +0000 (0:00:00.749) 0:12:03.854 ******* 2026-02-02 05:45:49.776773 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:45:49.776784 | orchestrator | 2026-02-02 05:45:49.776795 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-02 05:45:49.776806 | orchestrator | Monday 02 February 2026 05:45:37 +0000 (0:00:01.674) 0:12:05.529 ******* 2026-02-02 05:45:49.776816 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:45:49.776827 | orchestrator | 2026-02-02 05:45:49.776838 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-02 05:45:49.776849 | orchestrator | Monday 02 February 2026 05:45:40 +0000 (0:00:02.183) 0:12:07.713 ******* 2026-02-02 05:45:49.776859 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1 2026-02-02 05:45:49.776871 | orchestrator | 2026-02-02 05:45:49.776882 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-02 05:45:49.776893 | orchestrator | Monday 02 February 2026 05:45:41 +0000 (0:00:01.312) 0:12:09.025 ******* 2026-02-02 05:45:49.776904 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:45:49.776914 | orchestrator | 2026-02-02 05:45:49.776925 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-02 05:45:49.776936 | orchestrator | Monday 02 February 2026 05:45:42 +0000 (0:00:01.118) 0:12:10.144 ******* 
2026-02-02 05:45:49.776947 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:45:49.776957 | orchestrator | 2026-02-02 05:45:49.776968 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-02 05:45:49.776979 | orchestrator | Monday 02 February 2026 05:45:43 +0000 (0:00:01.137) 0:12:11.282 ******* 2026-02-02 05:45:49.776990 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-02 05:45:49.777001 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-02 05:45:49.777011 | orchestrator | 2026-02-02 05:45:49.777022 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-02 05:45:49.777038 | orchestrator | Monday 02 February 2026 05:45:45 +0000 (0:00:01.896) 0:12:13.179 ******* 2026-02-02 05:45:49.777049 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:45:49.777060 | orchestrator | 2026-02-02 05:45:49.777071 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-02 05:45:49.777081 | orchestrator | Monday 02 February 2026 05:45:47 +0000 (0:00:01.480) 0:12:14.659 ******* 2026-02-02 05:45:49.777092 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:45:49.777102 | orchestrator | 2026-02-02 05:45:49.777113 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-02 05:45:49.777124 | orchestrator | Monday 02 February 2026 05:45:48 +0000 (0:00:01.137) 0:12:15.797 ******* 2026-02-02 05:45:49.777135 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:45:49.777145 | orchestrator | 2026-02-02 05:45:49.777156 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-02 05:45:49.777167 | orchestrator | Monday 02 February 2026 05:45:48 +0000 (0:00:00.775) 0:12:16.573 ******* 2026-02-02 05:45:49.777185 | orchestrator | 
skipping: [testbed-node-1] 2026-02-02 05:46:29.678849 | orchestrator | 2026-02-02 05:46:29.678967 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-02 05:46:29.678978 | orchestrator | Monday 02 February 2026 05:45:49 +0000 (0:00:00.770) 0:12:17.344 ******* 2026-02-02 05:46:29.678986 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1 2026-02-02 05:46:29.679002 | orchestrator | 2026-02-02 05:46:29.679009 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-02 05:46:29.679015 | orchestrator | Monday 02 February 2026 05:45:50 +0000 (0:00:01.161) 0:12:18.506 ******* 2026-02-02 05:46:29.679022 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:46:29.679029 | orchestrator | 2026-02-02 05:46:29.679036 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-02 05:46:29.679061 | orchestrator | Monday 02 February 2026 05:45:52 +0000 (0:00:01.671) 0:12:20.178 ******* 2026-02-02 05:46:29.679067 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-02 05:46:29.679074 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-02 05:46:29.679080 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-02 05:46:29.679086 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:46:29.679093 | orchestrator | 2026-02-02 05:46:29.679100 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-02 05:46:29.679106 | orchestrator | Monday 02 February 2026 05:45:53 +0000 (0:00:01.133) 0:12:21.312 ******* 2026-02-02 05:46:29.679112 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:46:29.679118 | orchestrator | 2026-02-02 05:46:29.679124 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-02-02 05:46:29.679130 | orchestrator | Monday 02 February 2026 05:45:54 +0000 (0:00:01.215) 0:12:22.527 ******* 2026-02-02 05:46:29.679136 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:46:29.679142 | orchestrator | 2026-02-02 05:46:29.679148 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-02 05:46:29.679155 | orchestrator | Monday 02 February 2026 05:45:56 +0000 (0:00:01.184) 0:12:23.712 ******* 2026-02-02 05:46:29.679161 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:46:29.679167 | orchestrator | 2026-02-02 05:46:29.679174 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-02 05:46:29.679184 | orchestrator | Monday 02 February 2026 05:45:57 +0000 (0:00:01.131) 0:12:24.843 ******* 2026-02-02 05:46:29.679194 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:46:29.679203 | orchestrator | 2026-02-02 05:46:29.679213 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-02 05:46:29.679223 | orchestrator | Monday 02 February 2026 05:45:58 +0000 (0:00:01.131) 0:12:25.975 ******* 2026-02-02 05:46:29.679232 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:46:29.679242 | orchestrator | 2026-02-02 05:46:29.679252 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-02 05:46:29.679262 | orchestrator | Monday 02 February 2026 05:45:59 +0000 (0:00:00.771) 0:12:26.747 ******* 2026-02-02 05:46:29.679273 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:46:29.679282 | orchestrator | 2026-02-02 05:46:29.679292 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-02 05:46:29.679301 | orchestrator | Monday 02 February 2026 05:46:01 +0000 (0:00:02.239) 0:12:28.986 ******* 2026-02-02 05:46:29.679311 | orchestrator | ok: 
[testbed-node-1] 2026-02-02 05:46:29.679319 | orchestrator | 2026-02-02 05:46:29.679329 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-02 05:46:29.679338 | orchestrator | Monday 02 February 2026 05:46:02 +0000 (0:00:00.757) 0:12:29.744 ******* 2026-02-02 05:46:29.679347 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1 2026-02-02 05:46:29.679356 | orchestrator | 2026-02-02 05:46:29.679365 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-02 05:46:29.679375 | orchestrator | Monday 02 February 2026 05:46:03 +0000 (0:00:01.128) 0:12:30.873 ******* 2026-02-02 05:46:29.679385 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:46:29.679396 | orchestrator | 2026-02-02 05:46:29.679406 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-02 05:46:29.679417 | orchestrator | Monday 02 February 2026 05:46:04 +0000 (0:00:01.132) 0:12:32.005 ******* 2026-02-02 05:46:29.679427 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:46:29.679436 | orchestrator | 2026-02-02 05:46:29.679446 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-02 05:46:29.679455 | orchestrator | Monday 02 February 2026 05:46:05 +0000 (0:00:01.161) 0:12:33.167 ******* 2026-02-02 05:46:29.679466 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:46:29.679475 | orchestrator | 2026-02-02 05:46:29.679495 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-02 05:46:29.679505 | orchestrator | Monday 02 February 2026 05:46:06 +0000 (0:00:01.139) 0:12:34.306 ******* 2026-02-02 05:46:29.679515 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:46:29.679525 | orchestrator | 2026-02-02 05:46:29.679569 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-02-02 05:46:29.679580 | orchestrator | Monday 02 February 2026 05:46:07 +0000 (0:00:01.121) 0:12:35.427 ******* 2026-02-02 05:46:29.679590 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:46:29.679601 | orchestrator | 2026-02-02 05:46:29.679612 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-02 05:46:29.679622 | orchestrator | Monday 02 February 2026 05:46:08 +0000 (0:00:01.150) 0:12:36.578 ******* 2026-02-02 05:46:29.679632 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:46:29.679641 | orchestrator | 2026-02-02 05:46:29.679651 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-02 05:46:29.679662 | orchestrator | Monday 02 February 2026 05:46:10 +0000 (0:00:01.196) 0:12:37.775 ******* 2026-02-02 05:46:29.679674 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:46:29.679686 | orchestrator | 2026-02-02 05:46:29.679698 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-02 05:46:29.679731 | orchestrator | Monday 02 February 2026 05:46:11 +0000 (0:00:01.173) 0:12:38.948 ******* 2026-02-02 05:46:29.679743 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:46:29.679754 | orchestrator | 2026-02-02 05:46:29.679766 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-02 05:46:29.679777 | orchestrator | Monday 02 February 2026 05:46:12 +0000 (0:00:01.181) 0:12:40.130 ******* 2026-02-02 05:46:29.679788 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:46:29.679800 | orchestrator | 2026-02-02 05:46:29.679810 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-02 05:46:29.679820 | orchestrator | Monday 02 February 2026 05:46:13 +0000 (0:00:00.782) 0:12:40.912 ******* 2026-02-02 05:46:29.679831 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1 2026-02-02 05:46:29.679841 | orchestrator | 2026-02-02 05:46:29.679852 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-02 05:46:29.679863 | orchestrator | Monday 02 February 2026 05:46:14 +0000 (0:00:01.096) 0:12:42.008 ******* 2026-02-02 05:46:29.679874 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph) 2026-02-02 05:46:29.679884 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/) 2026-02-02 05:46:29.679895 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-02-02 05:46:29.679906 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-02-02 05:46:29.679917 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-02-02 05:46:29.679928 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-02-02 05:46:29.679940 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-02-02 05:46:29.679951 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-02-02 05:46:29.679962 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-02 05:46:29.679972 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-02 05:46:29.679982 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-02 05:46:29.679993 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-02 05:46:29.680004 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-02 05:46:29.680014 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-02 05:46:29.680025 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph) 2026-02-02 05:46:29.680034 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph) 2026-02-02 05:46:29.680045 | orchestrator | 2026-02-02 05:46:29.680056 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-02 05:46:29.680076 | orchestrator | Monday 02 February 2026 05:46:20 +0000 (0:00:06.491) 0:12:48.500 ******* 2026-02-02 05:46:29.680088 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:46:29.680100 | orchestrator | 2026-02-02 05:46:29.680111 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-02 05:46:29.680123 | orchestrator | Monday 02 February 2026 05:46:21 +0000 (0:00:00.782) 0:12:49.283 ******* 2026-02-02 05:46:29.680134 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:46:29.680146 | orchestrator | 2026-02-02 05:46:29.680158 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-02 05:46:29.680169 | orchestrator | Monday 02 February 2026 05:46:22 +0000 (0:00:00.791) 0:12:50.075 ******* 2026-02-02 05:46:29.680180 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:46:29.680192 | orchestrator | 2026-02-02 05:46:29.680203 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-02 05:46:29.680215 | orchestrator | Monday 02 February 2026 05:46:23 +0000 (0:00:00.778) 0:12:50.853 ******* 2026-02-02 05:46:29.680225 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:46:29.680235 | orchestrator | 2026-02-02 05:46:29.680245 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-02 05:46:29.680257 | orchestrator | Monday 02 February 2026 05:46:24 +0000 (0:00:00.844) 0:12:51.698 ******* 2026-02-02 05:46:29.680268 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:46:29.680280 | orchestrator | 2026-02-02 05:46:29.680290 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-02 05:46:29.680302 | orchestrator | Monday 02 February 2026 05:46:24 +0000 (0:00:00.779) 0:12:52.477 ******* 2026-02-02 
05:46:29.680327 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:46:29.680337 | orchestrator | 2026-02-02 05:46:29.680347 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-02 05:46:29.680368 | orchestrator | Monday 02 February 2026 05:46:25 +0000 (0:00:00.851) 0:12:53.329 ******* 2026-02-02 05:46:29.680379 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:46:29.680390 | orchestrator | 2026-02-02 05:46:29.680399 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-02 05:46:29.680410 | orchestrator | Monday 02 February 2026 05:46:26 +0000 (0:00:00.788) 0:12:54.117 ******* 2026-02-02 05:46:29.680419 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:46:29.680428 | orchestrator | 2026-02-02 05:46:29.680445 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-02 05:46:29.680456 | orchestrator | Monday 02 February 2026 05:46:27 +0000 (0:00:00.768) 0:12:54.886 ******* 2026-02-02 05:46:29.680465 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:46:29.680474 | orchestrator | 2026-02-02 05:46:29.680484 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-02 05:46:29.680494 | orchestrator | Monday 02 February 2026 05:46:28 +0000 (0:00:00.792) 0:12:55.679 ******* 2026-02-02 05:46:29.680504 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:46:29.680514 | orchestrator | 2026-02-02 05:46:29.680525 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-02 05:46:29.680565 | orchestrator | Monday 02 February 2026 05:46:28 +0000 (0:00:00.769) 0:12:56.449 ******* 2026-02-02 05:46:29.680577 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:46:29.680586 | orchestrator | 2026-02-02 
05:46:29.680608 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-02 05:47:16.995875 | orchestrator | Monday 02 February 2026 05:46:29 +0000 (0:00:00.799) 0:12:57.248 ******* 2026-02-02 05:47:16.995972 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:47:16.995980 | orchestrator | 2026-02-02 05:47:16.995985 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-02 05:47:16.995989 | orchestrator | Monday 02 February 2026 05:46:30 +0000 (0:00:00.798) 0:12:58.047 ******* 2026-02-02 05:47:16.995993 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:47:16.996011 | orchestrator | 2026-02-02 05:47:16.996015 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-02 05:47:16.996019 | orchestrator | Monday 02 February 2026 05:46:31 +0000 (0:00:00.890) 0:12:58.937 ******* 2026-02-02 05:47:16.996023 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:47:16.996026 | orchestrator | 2026-02-02 05:47:16.996030 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-02 05:47:16.996035 | orchestrator | Monday 02 February 2026 05:46:32 +0000 (0:00:00.784) 0:12:59.721 ******* 2026-02-02 05:47:16.996043 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:47:16.996047 | orchestrator | 2026-02-02 05:47:16.996051 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-02 05:47:16.996055 | orchestrator | Monday 02 February 2026 05:46:32 +0000 (0:00:00.860) 0:13:00.581 ******* 2026-02-02 05:47:16.996058 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:47:16.996062 | orchestrator | 2026-02-02 05:47:16.996066 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-02 05:47:16.996069 | orchestrator | Monday 02 February 2026 05:46:33 +0000 (0:00:00.796) 
0:13:01.378 ******* 2026-02-02 05:47:16.996073 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:47:16.996077 | orchestrator | 2026-02-02 05:47:16.996081 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-02 05:47:16.996086 | orchestrator | Monday 02 February 2026 05:46:34 +0000 (0:00:00.745) 0:13:02.124 ******* 2026-02-02 05:47:16.996090 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:47:16.996093 | orchestrator | 2026-02-02 05:47:16.996097 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-02 05:47:16.996101 | orchestrator | Monday 02 February 2026 05:46:35 +0000 (0:00:00.779) 0:13:02.903 ******* 2026-02-02 05:47:16.996105 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:47:16.996108 | orchestrator | 2026-02-02 05:47:16.996112 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-02 05:47:16.996116 | orchestrator | Monday 02 February 2026 05:46:36 +0000 (0:00:00.800) 0:13:03.703 ******* 2026-02-02 05:47:16.996119 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:47:16.996123 | orchestrator | 2026-02-02 05:47:16.996127 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-02 05:47:16.996130 | orchestrator | Monday 02 February 2026 05:46:36 +0000 (0:00:00.778) 0:13:04.482 ******* 2026-02-02 05:47:16.996134 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:47:16.996138 | orchestrator | 2026-02-02 05:47:16.996142 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-02 05:47:16.996145 | orchestrator | Monday 02 February 2026 05:46:37 +0000 (0:00:00.769) 0:13:05.252 ******* 2026-02-02 05:47:16.996149 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-02 05:47:16.996153 | orchestrator 
| skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-02 05:47:16.996156 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-02 05:47:16.996160 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:47:16.996164 | orchestrator | 2026-02-02 05:47:16.996168 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-02 05:47:16.996172 | orchestrator | Monday 02 February 2026 05:46:38 +0000 (0:00:01.076) 0:13:06.328 ******* 2026-02-02 05:47:16.996176 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-02 05:47:16.996179 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-02 05:47:16.996183 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-02 05:47:16.996187 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:47:16.996190 | orchestrator | 2026-02-02 05:47:16.996194 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-02 05:47:16.996198 | orchestrator | Monday 02 February 2026 05:46:39 +0000 (0:00:01.068) 0:13:07.396 ******* 2026-02-02 05:47:16.996201 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-02 05:47:16.996209 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-02 05:47:16.996212 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-02 05:47:16.996216 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:47:16.996220 | orchestrator | 2026-02-02 05:47:16.996224 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-02 05:47:16.996227 | orchestrator | Monday 02 February 2026 05:46:40 +0000 (0:00:01.064) 0:13:08.461 ******* 2026-02-02 05:47:16.996231 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:47:16.996235 | orchestrator | 2026-02-02 05:47:16.996248 | orchestrator | TASK [ceph-facts : Set_fact 
rgw_instances] ************************************* 2026-02-02 05:47:16.996252 | orchestrator | Monday 02 February 2026 05:46:41 +0000 (0:00:00.788) 0:13:09.249 ******* 2026-02-02 05:47:16.996256 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-02-02 05:47:16.996260 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:47:16.996263 | orchestrator | 2026-02-02 05:47:16.996267 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-02 05:47:16.996271 | orchestrator | Monday 02 February 2026 05:46:42 +0000 (0:00:00.901) 0:13:10.151 ******* 2026-02-02 05:47:16.996275 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:47:16.996278 | orchestrator | 2026-02-02 05:47:16.996282 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-02 05:47:16.996286 | orchestrator | Monday 02 February 2026 05:46:44 +0000 (0:00:01.472) 0:13:11.624 ******* 2026-02-02 05:47:16.996289 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:47:16.996293 | orchestrator | 2026-02-02 05:47:16.996297 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-02 05:47:16.996311 | orchestrator | Monday 02 February 2026 05:46:44 +0000 (0:00:00.852) 0:13:12.476 ******* 2026-02-02 05:47:16.996315 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1 2026-02-02 05:47:16.996319 | orchestrator | 2026-02-02 05:47:16.996323 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-02 05:47:16.996327 | orchestrator | Monday 02 February 2026 05:46:46 +0000 (0:00:01.294) 0:13:13.771 ******* 2026-02-02 05:47:16.996330 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] 2026-02-02 05:47:16.996334 | orchestrator | 2026-02-02 05:47:16.996338 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 
2026-02-02 05:47:16.996342 | orchestrator | Monday 02 February 2026 05:46:49 +0000 (0:00:03.155) 0:13:16.927 ******* 2026-02-02 05:47:16.996345 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:47:16.996349 | orchestrator | 2026-02-02 05:47:16.996353 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-02 05:47:16.996356 | orchestrator | Monday 02 February 2026 05:46:50 +0000 (0:00:01.202) 0:13:18.130 ******* 2026-02-02 05:47:16.996360 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:47:16.996364 | orchestrator | 2026-02-02 05:47:16.996367 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-02 05:47:16.996371 | orchestrator | Monday 02 February 2026 05:46:51 +0000 (0:00:01.122) 0:13:19.253 ******* 2026-02-02 05:47:16.996375 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:47:16.996379 | orchestrator | 2026-02-02 05:47:16.996382 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-02 05:47:16.996386 | orchestrator | Monday 02 February 2026 05:46:52 +0000 (0:00:01.206) 0:13:20.459 ******* 2026-02-02 05:47:16.996390 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:47:16.996393 | orchestrator | 2026-02-02 05:47:16.996397 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-02 05:47:16.996401 | orchestrator | Monday 02 February 2026 05:46:54 +0000 (0:00:02.057) 0:13:22.516 ******* 2026-02-02 05:47:16.996405 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:47:16.996408 | orchestrator | 2026-02-02 05:47:16.996412 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-02 05:47:16.996416 | orchestrator | Monday 02 February 2026 05:46:56 +0000 (0:00:01.651) 0:13:24.168 ******* 2026-02-02 05:47:16.996422 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:47:16.996426 | orchestrator | 
2026-02-02 05:47:16.996430 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-02 05:47:16.996434 | orchestrator | Monday 02 February 2026 05:46:58 +0000 (0:00:01.643) 0:13:25.811 ******* 2026-02-02 05:47:16.996437 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:47:16.996441 | orchestrator | 2026-02-02 05:47:16.996445 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-02 05:47:16.996448 | orchestrator | Monday 02 February 2026 05:46:59 +0000 (0:00:01.526) 0:13:27.338 ******* 2026-02-02 05:47:16.996452 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-02 05:47:16.996456 | orchestrator | 2026-02-02 05:47:16.996460 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-02 05:47:16.996463 | orchestrator | Monday 02 February 2026 05:47:01 +0000 (0:00:01.620) 0:13:28.958 ******* 2026-02-02 05:47:16.996467 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-02 05:47:16.996471 | orchestrator | 2026-02-02 05:47:16.996474 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-02 05:47:16.996478 | orchestrator | Monday 02 February 2026 05:47:02 +0000 (0:00:01.609) 0:13:30.568 ******* 2026-02-02 05:47:16.996482 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 05:47:16.996486 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-02-02 05:47:16.996489 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-02 05:47:16.996493 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-02-02 05:47:16.996497 | orchestrator | 2026-02-02 05:47:16.996501 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-02 05:47:16.996505 | orchestrator | Monday 02 February 2026 05:47:07 +0000 
(0:00:04.262) 0:13:34.830 ******* 2026-02-02 05:47:16.996508 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:47:16.996512 | orchestrator | 2026-02-02 05:47:16.996549 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-02 05:47:16.996553 | orchestrator | Monday 02 February 2026 05:47:09 +0000 (0:00:02.045) 0:13:36.876 ******* 2026-02-02 05:47:16.996557 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:47:16.996561 | orchestrator | 2026-02-02 05:47:16.996564 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-02 05:47:16.996568 | orchestrator | Monday 02 February 2026 05:47:10 +0000 (0:00:01.190) 0:13:38.066 ******* 2026-02-02 05:47:16.996572 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:47:16.996575 | orchestrator | 2026-02-02 05:47:16.996579 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-02 05:47:16.996586 | orchestrator | Monday 02 February 2026 05:47:11 +0000 (0:00:01.234) 0:13:39.301 ******* 2026-02-02 05:47:16.996590 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:47:16.996594 | orchestrator | 2026-02-02 05:47:16.996597 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-02 05:47:16.996601 | orchestrator | Monday 02 February 2026 05:47:13 +0000 (0:00:01.755) 0:13:41.056 ******* 2026-02-02 05:47:16.996605 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:47:16.996608 | orchestrator | 2026-02-02 05:47:16.996612 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-02 05:47:16.996616 | orchestrator | Monday 02 February 2026 05:47:15 +0000 (0:00:01.562) 0:13:42.619 ******* 2026-02-02 05:47:16.996620 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:47:16.996623 | orchestrator | 2026-02-02 05:47:16.996627 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] 
************************************ 2026-02-02 05:47:16.996631 | orchestrator | Monday 02 February 2026 05:47:15 +0000 (0:00:00.765) 0:13:43.385 ******* 2026-02-02 05:47:16.996635 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-1 2026-02-02 05:47:16.996638 | orchestrator | 2026-02-02 05:47:16.996645 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-02 05:48:23.950418 | orchestrator | Monday 02 February 2026 05:47:16 +0000 (0:00:01.177) 0:13:44.562 ******* 2026-02-02 05:48:23.950536 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:48:23.950554 | orchestrator | 2026-02-02 05:48:23.950568 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-02 05:48:23.950579 | orchestrator | Monday 02 February 2026 05:47:18 +0000 (0:00:01.113) 0:13:45.676 ******* 2026-02-02 05:48:23.950590 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:48:23.950601 | orchestrator | 2026-02-02 05:48:23.950612 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-02 05:48:23.950623 | orchestrator | Monday 02 February 2026 05:47:19 +0000 (0:00:01.157) 0:13:46.834 ******* 2026-02-02 05:48:23.950633 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-1 2026-02-02 05:48:23.950644 | orchestrator | 2026-02-02 05:48:23.950655 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-02 05:48:23.950665 | orchestrator | Monday 02 February 2026 05:47:20 +0000 (0:00:01.116) 0:13:47.950 ******* 2026-02-02 05:48:23.950676 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:48:23.950687 | orchestrator | 2026-02-02 05:48:23.950698 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-02 05:48:23.950709 | orchestrator | Monday 02 February 2026 05:47:23 +0000 
(0:00:02.683) 0:13:50.634 ******* 2026-02-02 05:48:23.950720 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:48:23.950730 | orchestrator | 2026-02-02 05:48:23.950741 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-02 05:48:23.950752 | orchestrator | Monday 02 February 2026 05:47:25 +0000 (0:00:02.002) 0:13:52.636 ******* 2026-02-02 05:48:23.950763 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:48:23.950773 | orchestrator | 2026-02-02 05:48:23.950785 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-02 05:48:23.950796 | orchestrator | Monday 02 February 2026 05:47:27 +0000 (0:00:02.604) 0:13:55.241 ******* 2026-02-02 05:48:23.950806 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:48:23.950817 | orchestrator | 2026-02-02 05:48:23.950828 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-02 05:48:23.950839 | orchestrator | Monday 02 February 2026 05:47:30 +0000 (0:00:03.033) 0:13:58.274 ******* 2026-02-02 05:48:23.950849 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-1 2026-02-02 05:48:23.950861 | orchestrator | 2026-02-02 05:48:23.950872 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-02 05:48:23.950882 | orchestrator | Monday 02 February 2026 05:47:31 +0000 (0:00:01.153) 0:13:59.427 ******* 2026-02-02 05:48:23.950893 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-02-02 05:48:23.950904 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:48:23.950917 | orchestrator | 2026-02-02 05:48:23.950930 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-02 05:48:23.950943 | orchestrator | Monday 02 February 2026 05:47:54 +0000 (0:00:22.967) 0:14:22.395 ******* 2026-02-02 05:48:23.950965 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:48:23.950992 | orchestrator | 2026-02-02 05:48:23.951015 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-02 05:48:23.951036 | orchestrator | Monday 02 February 2026 05:47:57 +0000 (0:00:02.833) 0:14:25.229 ******* 2026-02-02 05:48:23.951056 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:48:23.951075 | orchestrator | 2026-02-02 05:48:23.951095 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-02 05:48:23.951116 | orchestrator | Monday 02 February 2026 05:47:58 +0000 (0:00:00.792) 0:14:26.021 ******* 2026-02-02 05:48:23.951141 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d80dfd80fe861993a62a686f3742b6b1f75a206'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-02 05:48:23.951189 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d80dfd80fe861993a62a686f3742b6b1f75a206'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-02 05:48:23.951219 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d80dfd80fe861993a62a686f3742b6b1f75a206'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-02 05:48:23.951232 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d80dfd80fe861993a62a686f3742b6b1f75a206'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-02 05:48:23.951264 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d80dfd80fe861993a62a686f3742b6b1f75a206'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-02 05:48:23.951277 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d80dfd80fe861993a62a686f3742b6b1f75a206'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__2d80dfd80fe861993a62a686f3742b6b1f75a206'}])  2026-02-02 05:48:23.951290 | orchestrator | 2026-02-02 05:48:23.951301 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-02 05:48:23.951312 | orchestrator | Monday 02 February 2026 05:48:08 +0000 (0:00:09.632) 0:14:35.653 ******* 2026-02-02 05:48:23.951323 | orchestrator | changed: [testbed-node-1] 2026-02-02 05:48:23.951334 | orchestrator | 
2026-02-02 05:48:23.951371 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-02 05:48:23.951382 | orchestrator | Monday 02 February 2026 05:48:10 +0000 (0:00:02.124) 0:14:37.778 ******* 2026-02-02 05:48:23.951392 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 05:48:23.951403 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-02-02 05:48:23.951414 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-02-02 05:48:23.951424 | orchestrator | 2026-02-02 05:48:23.951435 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-02 05:48:23.951446 | orchestrator | Monday 02 February 2026 05:48:12 +0000 (0:00:01.938) 0:14:39.717 ******* 2026-02-02 05:48:23.951456 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-02 05:48:23.951467 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-02 05:48:23.951478 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-02 05:48:23.951489 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:48:23.951499 | orchestrator | 2026-02-02 05:48:23.951510 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-02 05:48:23.951521 | orchestrator | Monday 02 February 2026 05:48:13 +0000 (0:00:01.053) 0:14:40.771 ******* 2026-02-02 05:48:23.951531 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:48:23.951542 | orchestrator | 2026-02-02 05:48:23.951562 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-02-02 05:48:23.951573 | orchestrator | Monday 02 February 2026 05:48:13 +0000 (0:00:00.763) 0:14:41.535 ******* 2026-02-02 05:48:23.951583 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:48:23.951594 | orchestrator | 2026-02-02 05:48:23.951605 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-02 05:48:23.951616 | orchestrator | Monday 02 February 2026 05:48:16 +0000 (0:00:02.418) 0:14:43.953 ******* 2026-02-02 05:48:23.951626 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:48:23.951637 | orchestrator | 2026-02-02 05:48:23.951648 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-02 05:48:23.951658 | orchestrator | Monday 02 February 2026 05:48:17 +0000 (0:00:00.767) 0:14:44.720 ******* 2026-02-02 05:48:23.951669 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:48:23.951680 | orchestrator | 2026-02-02 05:48:23.951690 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-02 05:48:23.951701 | orchestrator | Monday 02 February 2026 05:48:17 +0000 (0:00:00.807) 0:14:45.527 ******* 2026-02-02 05:48:23.951712 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:48:23.951722 | orchestrator | 2026-02-02 05:48:23.951733 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-02 05:48:23.951744 | orchestrator | Monday 02 February 2026 05:48:18 +0000 (0:00:00.764) 0:14:46.292 ******* 2026-02-02 05:48:23.951754 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:48:23.951765 | orchestrator | 2026-02-02 05:48:23.951776 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-02 05:48:23.951786 | orchestrator | Monday 02 February 2026 05:48:19 +0000 (0:00:00.773) 0:14:47.066 ******* 2026-02-02 05:48:23.951797 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:48:23.951808 | 
orchestrator | 2026-02-02 05:48:23.951819 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-02-02 05:48:23.951829 | orchestrator | Monday 02 February 2026 05:48:20 +0000 (0:00:00.761) 0:14:47.828 ******* 2026-02-02 05:48:23.951840 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:48:23.951851 | orchestrator | 2026-02-02 05:48:23.951867 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-02 05:48:23.951878 | orchestrator | Monday 02 February 2026 05:48:21 +0000 (0:00:00.784) 0:14:48.613 ******* 2026-02-02 05:48:23.951889 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:48:23.951899 | orchestrator | 2026-02-02 05:48:23.951910 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-02-02 05:48:23.951921 | orchestrator | 2026-02-02 05:48:23.951932 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-02-02 05:48:23.951942 | orchestrator | Monday 02 February 2026 05:48:21 +0000 (0:00:00.947) 0:14:49.560 ******* 2026-02-02 05:48:23.951953 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:48:23.951964 | orchestrator | 2026-02-02 05:48:23.951974 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-02-02 05:48:23.951985 | orchestrator | Monday 02 February 2026 05:48:23 +0000 (0:00:01.126) 0:14:50.687 ******* 2026-02-02 05:48:23.951996 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:48:23.952006 | orchestrator | 2026-02-02 05:48:23.952017 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-02-02 05:48:23.952034 | orchestrator | Monday 02 February 2026 05:48:23 +0000 (0:00:00.830) 0:14:51.517 ******* 2026-02-02 05:48:48.513140 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:48:48.513255 | orchestrator | 2026-02-02 05:48:48.513341 | orchestrator 
| TASK [Select a running monitor] ************************************************ 2026-02-02 05:48:48.513367 | orchestrator | Monday 02 February 2026 05:48:24 +0000 (0:00:00.773) 0:14:52.291 ******* 2026-02-02 05:48:48.513387 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:48:48.513404 | orchestrator | 2026-02-02 05:48:48.513420 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-02 05:48:48.513430 | orchestrator | Monday 02 February 2026 05:48:25 +0000 (0:00:00.819) 0:14:53.111 ******* 2026-02-02 05:48:48.513462 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-02-02 05:48:48.513473 | orchestrator | 2026-02-02 05:48:48.513483 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-02 05:48:48.513492 | orchestrator | Monday 02 February 2026 05:48:26 +0000 (0:00:01.086) 0:14:54.197 ******* 2026-02-02 05:48:48.513502 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:48:48.513511 | orchestrator | 2026-02-02 05:48:48.513521 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-02 05:48:48.513531 | orchestrator | Monday 02 February 2026 05:48:28 +0000 (0:00:01.468) 0:14:55.666 ******* 2026-02-02 05:48:48.513540 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:48:48.513550 | orchestrator | 2026-02-02 05:48:48.513559 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-02 05:48:48.513569 | orchestrator | Monday 02 February 2026 05:48:29 +0000 (0:00:01.115) 0:14:56.781 ******* 2026-02-02 05:48:48.513578 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:48:48.513589 | orchestrator | 2026-02-02 05:48:48.513599 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-02 05:48:48.513608 | orchestrator | Monday 02 February 2026 05:48:30 +0000 (0:00:01.484) 0:14:58.266 
******* 2026-02-02 05:48:48.513618 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:48:48.513627 | orchestrator | 2026-02-02 05:48:48.513637 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-02 05:48:48.513646 | orchestrator | Monday 02 February 2026 05:48:31 +0000 (0:00:01.148) 0:14:59.414 ******* 2026-02-02 05:48:48.513656 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:48:48.513665 | orchestrator | 2026-02-02 05:48:48.513675 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-02 05:48:48.513685 | orchestrator | Monday 02 February 2026 05:48:32 +0000 (0:00:01.167) 0:15:00.582 ******* 2026-02-02 05:48:48.513696 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:48:48.513707 | orchestrator | 2026-02-02 05:48:48.513718 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-02 05:48:48.513730 | orchestrator | Monday 02 February 2026 05:48:34 +0000 (0:00:01.149) 0:15:01.731 ******* 2026-02-02 05:48:48.513741 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:48:48.513752 | orchestrator | 2026-02-02 05:48:48.513764 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-02 05:48:48.513775 | orchestrator | Monday 02 February 2026 05:48:35 +0000 (0:00:01.127) 0:15:02.859 ******* 2026-02-02 05:48:48.513787 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:48:48.513798 | orchestrator | 2026-02-02 05:48:48.513809 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-02 05:48:48.513820 | orchestrator | Monday 02 February 2026 05:48:36 +0000 (0:00:01.140) 0:15:03.999 ******* 2026-02-02 05:48:48.513832 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 05:48:48.513844 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2026-02-02 05:48:48.513855 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-02 05:48:48.513866 | orchestrator | 2026-02-02 05:48:48.513877 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-02 05:48:48.513888 | orchestrator | Monday 02 February 2026 05:48:38 +0000 (0:00:01.983) 0:15:05.983 ******* 2026-02-02 05:48:48.513965 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:48:48.513980 | orchestrator | 2026-02-02 05:48:48.513991 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-02 05:48:48.514003 | orchestrator | Monday 02 February 2026 05:48:39 +0000 (0:00:01.359) 0:15:07.343 ******* 2026-02-02 05:48:48.514114 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 05:48:48.514135 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 05:48:48.514152 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-02 05:48:48.514187 | orchestrator | 2026-02-02 05:48:48.514197 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-02 05:48:48.514207 | orchestrator | Monday 02 February 2026 05:48:42 +0000 (0:00:03.226) 0:15:10.569 ******* 2026-02-02 05:48:48.514229 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-02 05:48:48.514239 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-02 05:48:48.514248 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-02 05:48:48.514258 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:48:48.514267 | orchestrator | 2026-02-02 05:48:48.514277 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-02 05:48:48.514322 | orchestrator | Monday 02 February 2026 05:48:44 +0000 (0:00:01.421) 
0:15:11.990 ******* 2026-02-02 05:48:48.514342 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-02 05:48:48.514363 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-02 05:48:48.514394 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-02 05:48:48.514405 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:48:48.514415 | orchestrator | 2026-02-02 05:48:48.514424 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-02 05:48:48.514434 | orchestrator | Monday 02 February 2026 05:48:46 +0000 (0:00:01.663) 0:15:13.654 ******* 2026-02-02 05:48:48.514446 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 05:48:48.514460 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 05:48:48.514470 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 05:48:48.514480 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:48:48.514489 | orchestrator | 2026-02-02 05:48:48.514499 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-02 05:48:48.514509 | orchestrator | Monday 02 February 2026 05:48:47 +0000 (0:00:01.207) 0:15:14.862 ******* 2026-02-02 05:48:48.514520 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '01c921aa07f8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-02 05:48:40.620239', 'end': '2026-02-02 05:48:40.671337', 'delta': '0:00:00.051098', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['01c921aa07f8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-02 05:48:48.514547 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'c530967d0aad', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-02 05:48:41.203428', 'end': '2026-02-02 
05:48:41.240670', 'delta': '0:00:00.037242', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c530967d0aad'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-02 05:48:48.514564 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '39d29fabc2d2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-02 05:48:41.762518', 'end': '2026-02-02 05:48:41.814816', 'delta': '0:00:00.052298', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['39d29fabc2d2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-02 05:49:07.342868 | orchestrator | 2026-02-02 05:49:07.342984 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-02 05:49:07.343001 | orchestrator | Monday 02 February 2026 05:48:48 +0000 (0:00:01.217) 0:15:16.080 ******* 2026-02-02 05:49:07.343013 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:49:07.343026 | orchestrator | 2026-02-02 05:49:07.343037 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-02 05:49:07.343048 | orchestrator | Monday 02 February 2026 05:48:49 +0000 (0:00:01.269) 0:15:17.349 ******* 2026-02-02 05:49:07.343060 | orchestrator | skipping: 
[testbed-node-2] 2026-02-02 05:49:07.343072 | orchestrator | 2026-02-02 05:49:07.343083 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-02 05:49:07.343094 | orchestrator | Monday 02 February 2026 05:48:51 +0000 (0:00:01.309) 0:15:18.659 ******* 2026-02-02 05:49:07.343105 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:49:07.343116 | orchestrator | 2026-02-02 05:49:07.343127 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-02 05:49:07.343138 | orchestrator | Monday 02 February 2026 05:48:52 +0000 (0:00:01.185) 0:15:19.845 ******* 2026-02-02 05:49:07.343149 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-02 05:49:07.343160 | orchestrator | 2026-02-02 05:49:07.343172 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 05:49:07.343182 | orchestrator | Monday 02 February 2026 05:48:54 +0000 (0:00:02.066) 0:15:21.911 ******* 2026-02-02 05:49:07.343193 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:49:07.343204 | orchestrator | 2026-02-02 05:49:07.343216 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-02 05:49:07.343227 | orchestrator | Monday 02 February 2026 05:48:55 +0000 (0:00:01.227) 0:15:23.139 ******* 2026-02-02 05:49:07.343298 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:49:07.343311 | orchestrator | 2026-02-02 05:49:07.343322 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-02 05:49:07.343333 | orchestrator | Monday 02 February 2026 05:48:56 +0000 (0:00:01.130) 0:15:24.270 ******* 2026-02-02 05:49:07.343366 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:49:07.343377 | orchestrator | 2026-02-02 05:49:07.343388 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 
05:49:07.343399 | orchestrator | Monday 02 February 2026 05:48:57 +0000 (0:00:01.248) 0:15:25.518 ******* 2026-02-02 05:49:07.343413 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:49:07.343425 | orchestrator | 2026-02-02 05:49:07.343438 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-02 05:49:07.343451 | orchestrator | Monday 02 February 2026 05:48:59 +0000 (0:00:01.138) 0:15:26.657 ******* 2026-02-02 05:49:07.343464 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:49:07.343476 | orchestrator | 2026-02-02 05:49:07.343489 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-02 05:49:07.343501 | orchestrator | Monday 02 February 2026 05:49:00 +0000 (0:00:01.214) 0:15:27.871 ******* 2026-02-02 05:49:07.343515 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:49:07.343533 | orchestrator | 2026-02-02 05:49:07.343553 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-02 05:49:07.343572 | orchestrator | Monday 02 February 2026 05:49:01 +0000 (0:00:01.146) 0:15:29.018 ******* 2026-02-02 05:49:07.343586 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:49:07.343599 | orchestrator | 2026-02-02 05:49:07.343612 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-02 05:49:07.343625 | orchestrator | Monday 02 February 2026 05:49:02 +0000 (0:00:01.200) 0:15:30.219 ******* 2026-02-02 05:49:07.343637 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:49:07.343650 | orchestrator | 2026-02-02 05:49:07.343663 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-02 05:49:07.343675 | orchestrator | Monday 02 February 2026 05:49:03 +0000 (0:00:01.177) 0:15:31.396 ******* 2026-02-02 05:49:07.343688 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:49:07.343700 | 
orchestrator | 2026-02-02 05:49:07.343713 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-02 05:49:07.343726 | orchestrator | Monday 02 February 2026 05:49:04 +0000 (0:00:01.123) 0:15:32.520 ******* 2026-02-02 05:49:07.343739 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:49:07.343752 | orchestrator | 2026-02-02 05:49:07.343764 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-02 05:49:07.343775 | orchestrator | Monday 02 February 2026 05:49:06 +0000 (0:00:01.127) 0:15:33.648 ******* 2026-02-02 05:49:07.343803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:49:07.343818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:49:07.343848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  
2026-02-02 05:49:07.343862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-23-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-02 05:49:07.343944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:49:07.343957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:49:07.343968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 
05:49:07.343999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0dc97797', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part16', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part14', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part15', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part1', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 
'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-02 05:49:08.613946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:49:08.614142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:49:08.614163 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:49:08.614177 | orchestrator | 2026-02-02 05:49:08.614190 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-02 05:49:08.614202 | orchestrator | Monday 02 February 2026 05:49:07 +0000 (0:00:01.263) 0:15:34.911 ******* 2026-02-02 05:49:08.614217 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:49:08.614297 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:49:08.614310 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:49:08.614337 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-23-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:49:08.614369 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:49:08.614390 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:49:08.614402 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': 
'0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:49:08.614425 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0dc97797', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part16', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part14', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part15', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part1', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:49:08.614450 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:49:43.974783 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:49:43.974897 | orchestrator | skipping: [testbed-node-2] 2026-02-02 
05:49:43.974915 | orchestrator | 2026-02-02 05:49:43.974928 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-02 05:49:43.974940 | orchestrator | Monday 02 February 2026 05:49:08 +0000 (0:00:01.269) 0:15:36.181 ******* 2026-02-02 05:49:43.974952 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:49:43.974963 | orchestrator | 2026-02-02 05:49:43.974974 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-02 05:49:43.974985 | orchestrator | Monday 02 February 2026 05:49:10 +0000 (0:00:01.495) 0:15:37.676 ******* 2026-02-02 05:49:43.974996 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:49:43.975007 | orchestrator | 2026-02-02 05:49:43.975018 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-02 05:49:43.975029 | orchestrator | Monday 02 February 2026 05:49:11 +0000 (0:00:01.201) 0:15:38.878 ******* 2026-02-02 05:49:43.975039 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:49:43.975050 | orchestrator | 2026-02-02 05:49:43.975061 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-02 05:49:43.975072 | orchestrator | Monday 02 February 2026 05:49:12 +0000 (0:00:01.473) 0:15:40.352 ******* 2026-02-02 05:49:43.975083 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:49:43.975094 | orchestrator | 2026-02-02 05:49:43.975105 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-02 05:49:43.975116 | orchestrator | Monday 02 February 2026 05:49:13 +0000 (0:00:01.137) 0:15:41.489 ******* 2026-02-02 05:49:43.975127 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:49:43.975137 | orchestrator | 2026-02-02 05:49:43.975149 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-02 05:49:43.975195 | orchestrator | Monday 02 February 2026 
05:49:15 +0000 (0:00:01.295) 0:15:42.785 ******* 2026-02-02 05:49:43.975207 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:49:43.975218 | orchestrator | 2026-02-02 05:49:43.975229 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-02 05:49:43.975240 | orchestrator | Monday 02 February 2026 05:49:16 +0000 (0:00:01.221) 0:15:44.006 ******* 2026-02-02 05:49:43.975250 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-02 05:49:43.975261 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-02-02 05:49:43.975272 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-02 05:49:43.975282 | orchestrator | 2026-02-02 05:49:43.975306 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-02 05:49:43.975320 | orchestrator | Monday 02 February 2026 05:49:18 +0000 (0:00:01.680) 0:15:45.687 ******* 2026-02-02 05:49:43.975333 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-02 05:49:43.975346 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-02 05:49:43.975383 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-02 05:49:43.975397 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:49:43.975408 | orchestrator | 2026-02-02 05:49:43.975421 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-02 05:49:43.975433 | orchestrator | Monday 02 February 2026 05:49:19 +0000 (0:00:01.136) 0:15:46.824 ******* 2026-02-02 05:49:43.975446 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:49:43.975458 | orchestrator | 2026-02-02 05:49:43.975493 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-02 05:49:43.975513 | orchestrator | Monday 02 February 2026 05:49:20 +0000 (0:00:01.106) 0:15:47.930 ******* 2026-02-02 05:49:43.975530 | 
orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 05:49:43.975549 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 05:49:43.975566 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-02 05:49:43.975584 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-02 05:49:43.975604 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-02 05:49:43.975623 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-02 05:49:43.975642 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-02 05:49:43.975660 | orchestrator | 2026-02-02 05:49:43.975678 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-02 05:49:43.975698 | orchestrator | Monday 02 February 2026 05:49:22 +0000 (0:00:01.855) 0:15:49.786 ******* 2026-02-02 05:49:43.975715 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 05:49:43.975735 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 05:49:43.975753 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-02 05:49:43.975770 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-02 05:49:43.975800 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-02 05:49:43.975811 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-02 05:49:43.975822 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-02 05:49:43.975833 | orchestrator | 2026-02-02 05:49:43.975843 | 
orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-02-02 05:49:43.975854 | orchestrator | Monday 02 February 2026 05:49:24 +0000 (0:00:02.246) 0:15:52.033 ******* 2026-02-02 05:49:43.975865 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:49:43.975875 | orchestrator | 2026-02-02 05:49:43.975886 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-02-02 05:49:43.975897 | orchestrator | Monday 02 February 2026 05:49:25 +0000 (0:00:00.859) 0:15:52.892 ******* 2026-02-02 05:49:43.975907 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:49:43.975918 | orchestrator | 2026-02-02 05:49:43.975929 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-02-02 05:49:43.975939 | orchestrator | Monday 02 February 2026 05:49:26 +0000 (0:00:00.844) 0:15:53.736 ******* 2026-02-02 05:49:43.975950 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:49:43.975961 | orchestrator | 2026-02-02 05:49:43.975972 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-02-02 05:49:43.975983 | orchestrator | Monday 02 February 2026 05:49:26 +0000 (0:00:00.781) 0:15:54.518 ******* 2026-02-02 05:49:43.975993 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:49:43.976004 | orchestrator | 2026-02-02 05:49:43.976015 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-02-02 05:49:43.976025 | orchestrator | Monday 02 February 2026 05:49:27 +0000 (0:00:00.881) 0:15:55.400 ******* 2026-02-02 05:49:43.976048 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:49:43.976059 | orchestrator | 2026-02-02 05:49:43.976070 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-02-02 05:49:43.976080 | orchestrator | Monday 02 February 2026 05:49:28 +0000 (0:00:00.788) 0:15:56.189 ******* 
2026-02-02 05:49:43.976091 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-02 05:49:43.976101 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-02 05:49:43.976112 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-02 05:49:43.976122 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:49:43.976133 | orchestrator |
2026-02-02 05:49:43.976144 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-02-02 05:49:43.976188 | orchestrator | Monday 02 February 2026 05:49:29 +0000 (0:00:01.392) 0:15:57.582 *******
2026-02-02 05:49:43.976201 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-02-02 05:49:43.976212 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-02-02 05:49:43.976223 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-02-02 05:49:43.976234 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-02-02 05:49:43.976244 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-02-02 05:49:43.976255 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-02-02 05:49:43.976266 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:49:43.976277 | orchestrator |
2026-02-02 05:49:43.976288 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-02-02 05:49:43.976299 | orchestrator | Monday 02 February 2026 05:49:31 +0000 (0:00:01.734) 0:15:59.316 *******
2026-02-02 05:49:43.976310 | orchestrator | changed: [testbed-node-2] => (item=testbed-node-2)
2026-02-02 05:49:43.976321 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-02 05:49:43.976332 | orchestrator |
2026-02-02 05:49:43.976343 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-02-02 05:49:43.976354 | orchestrator | Monday 02 February 2026 05:49:35 +0000 (0:00:04.064) 0:16:03.381 *******
2026-02-02 05:49:43.976365 | orchestrator | changed: [testbed-node-2]
2026-02-02 05:49:43.976376 | orchestrator |
2026-02-02 05:49:43.976395 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-02 05:49:43.976408 | orchestrator | Monday 02 February 2026 05:49:37 +0000 (0:00:02.103) 0:16:05.485 *******
2026-02-02 05:49:43.976427 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2
2026-02-02 05:49:43.976447 | orchestrator |
2026-02-02 05:49:43.976468 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-02 05:49:43.976487 | orchestrator | Monday 02 February 2026 05:49:39 +0000 (0:00:01.153) 0:16:06.639 *******
2026-02-02 05:49:43.976507 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2
2026-02-02 05:49:43.976524 | orchestrator |
2026-02-02 05:49:43.976542 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-02 05:49:43.976560 | orchestrator | Monday 02 February 2026 05:49:40 +0000 (0:00:01.110) 0:16:07.750 *******
2026-02-02 05:49:43.976580 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:49:43.976601 | orchestrator |
2026-02-02 05:49:43.976622 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-02 05:49:43.976642 | orchestrator | Monday 02 February 2026 05:49:41 +0000 (0:00:01.527) 0:16:09.277 *******
2026-02-02 05:49:43.976657 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:49:43.976668 | orchestrator |
2026-02-02 05:49:43.976678 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-02 05:49:43.976689 | orchestrator | Monday 02 February 2026 05:49:42 +0000 (0:00:01.145) 0:16:10.423 *******
2026-02-02 05:49:43.976709 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:49:43.976720 | orchestrator |
2026-02-02 05:49:43.976731 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-02 05:49:43.976751 | orchestrator | Monday 02 February 2026 05:49:43 +0000 (0:00:01.120) 0:16:11.543 *******
2026-02-02 05:50:25.751862 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:50:25.751977 | orchestrator |
2026-02-02 05:50:25.751993 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-02 05:50:25.752006 | orchestrator | Monday 02 February 2026 05:49:45 +0000 (0:00:01.124) 0:16:12.668 *******
2026-02-02 05:50:25.752017 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:50:25.752029 | orchestrator |
2026-02-02 05:50:25.752040 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-02 05:50:25.752051 | orchestrator | Monday 02 February 2026 05:49:46 +0000 (0:00:01.539) 0:16:14.208 *******
2026-02-02 05:50:25.752061 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:50:25.752072 | orchestrator |
2026-02-02 05:50:25.752130 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-02 05:50:25.752152 | orchestrator | Monday 02 February 2026 05:49:47 +0000 (0:00:01.127) 0:16:15.336 *******
2026-02-02 05:50:25.752171 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:50:25.752189 | orchestrator |
2026-02-02 05:50:25.752203 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-02 05:50:25.752214 | orchestrator | Monday 02 February 2026 05:49:48 +0000 (0:00:01.119) 0:16:16.456 *******
2026-02-02 05:50:25.752225 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:50:25.752236 | orchestrator |
2026-02-02 05:50:25.752247 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-02 05:50:25.752258 | orchestrator | Monday 02 February 2026 05:49:50 +0000 (0:00:01.958) 0:16:18.414 *******
2026-02-02 05:50:25.752268 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:50:25.752279 | orchestrator |
2026-02-02 05:50:25.752290 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-02 05:50:25.752301 | orchestrator | Monday 02 February 2026 05:49:52 +0000 (0:00:01.564) 0:16:19.979 *******
2026-02-02 05:50:25.752312 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:50:25.752323 | orchestrator |
2026-02-02 05:50:25.752334 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-02 05:50:25.752345 | orchestrator | Monday 02 February 2026 05:49:53 +0000 (0:00:00.764) 0:16:20.743 *******
2026-02-02 05:50:25.752356 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:50:25.752367 | orchestrator |
2026-02-02 05:50:25.752378 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-02 05:50:25.752389 | orchestrator | Monday 02 February 2026 05:49:53 +0000 (0:00:00.775) 0:16:21.518 *******
2026-02-02 05:50:25.752402 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:50:25.752414 | orchestrator |
2026-02-02 05:50:25.752427 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-02 05:50:25.752440 | orchestrator | Monday 02 February 2026 05:49:54 +0000 (0:00:00.768) 0:16:22.287 *******
2026-02-02 05:50:25.752452 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:50:25.752465 | orchestrator |
2026-02-02 05:50:25.752478 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-02 05:50:25.752490 | orchestrator | Monday 02 February 2026 05:49:55 +0000 (0:00:00.817) 0:16:23.105 *******
2026-02-02 05:50:25.752503 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:50:25.752515 | orchestrator |
2026-02-02 05:50:25.752528 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-02 05:50:25.752540 | orchestrator | Monday 02 February 2026 05:49:56 +0000 (0:00:00.821) 0:16:23.927 *******
2026-02-02 05:50:25.752553 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:50:25.752566 | orchestrator |
2026-02-02 05:50:25.752578 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-02 05:50:25.752588 | orchestrator | Monday 02 February 2026 05:49:57 +0000 (0:00:00.780) 0:16:24.708 *******
2026-02-02 05:50:25.752624 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:50:25.752635 | orchestrator |
2026-02-02 05:50:25.752646 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-02 05:50:25.752656 | orchestrator | Monday 02 February 2026 05:49:57 +0000 (0:00:00.748) 0:16:25.456 *******
2026-02-02 05:50:25.752667 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:50:25.752678 | orchestrator |
2026-02-02 05:50:25.752688 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-02 05:50:25.752699 | orchestrator | Monday 02 February 2026 05:49:58 +0000 (0:00:00.780) 0:16:26.237 *******
2026-02-02 05:50:25.752710 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:50:25.752721 | orchestrator |
2026-02-02 05:50:25.752746 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-02 05:50:25.752758 | orchestrator | Monday 02 February 2026 05:49:59 +0000 (0:00:00.787) 0:16:27.024 *******
2026-02-02 05:50:25.752768 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:50:25.752779 | orchestrator |
2026-02-02 05:50:25.752790 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-02 05:50:25.752801 | orchestrator | Monday 02 February 2026 05:50:00 +0000 (0:00:00.779) 0:16:27.804 *******
2026-02-02 05:50:25.752811 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:50:25.752822 | orchestrator |
2026-02-02 05:50:25.752833 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-02 05:50:25.752843 | orchestrator | Monday 02 February 2026 05:50:01 +0000 (0:00:00.833) 0:16:28.638 *******
2026-02-02 05:50:25.752854 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:50:25.752865 | orchestrator |
2026-02-02 05:50:25.752876 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-02 05:50:25.752886 | orchestrator | Monday 02 February 2026 05:50:01 +0000 (0:00:00.753) 0:16:29.391 *******
2026-02-02 05:50:25.752897 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:50:25.752908 | orchestrator |
2026-02-02 05:50:25.752918 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-02 05:50:25.752929 | orchestrator | Monday 02 February 2026 05:50:02 +0000 (0:00:00.778) 0:16:30.169 *******
2026-02-02 05:50:25.752940 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:50:25.752950 | orchestrator |
2026-02-02 05:50:25.752961 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-02 05:50:25.752972 | orchestrator | Monday 02 February 2026 05:50:03 +0000 (0:00:00.821) 0:16:30.991 *******
2026-02-02 05:50:25.752983 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:50:25.752993 | orchestrator |
2026-02-02 05:50:25.753023 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-02 05:50:25.753034 | orchestrator | Monday 02 February 2026 05:50:04 +0000 (0:00:00.767) 0:16:31.758 *******
2026-02-02 05:50:25.753045 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:50:25.753056 | orchestrator |
2026-02-02 05:50:25.753066 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-02 05:50:25.753077 | orchestrator | Monday 02 February 2026 05:50:04 +0000 (0:00:00.770) 0:16:32.529 *******
2026-02-02 05:50:25.753130 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:50:25.753143 | orchestrator |
2026-02-02 05:50:25.753154 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-02 05:50:25.753165 | orchestrator | Monday 02 February 2026 05:50:05 +0000 (0:00:00.857) 0:16:33.386 *******
2026-02-02 05:50:25.753176 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:50:25.753186 | orchestrator |
2026-02-02 05:50:25.753197 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-02 05:50:25.753219 | orchestrator | Monday 02 February 2026 05:50:06 +0000 (0:00:00.774) 0:16:34.161 *******
2026-02-02 05:50:25.753230 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:50:25.753241 | orchestrator |
2026-02-02 05:50:25.753251 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-02 05:50:25.753262 | orchestrator | Monday 02 February 2026 05:50:07 +0000 (0:00:00.833) 0:16:34.995 *******
2026-02-02 05:50:25.753281 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:50:25.753292 | orchestrator |
2026-02-02 05:50:25.753303 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-02 05:50:25.753313 | orchestrator | Monday 02 February 2026 05:50:08 +0000 (0:00:00.775) 0:16:35.771 *******
2026-02-02 05:50:25.753324 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:50:25.753334 | orchestrator |
2026-02-02 05:50:25.753345 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-02 05:50:25.753356 | orchestrator | Monday 02 February 2026 05:50:08 +0000 (0:00:00.793) 0:16:36.565 *******
2026-02-02 05:50:25.753366 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:50:25.753377 | orchestrator |
2026-02-02 05:50:25.753388 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-02 05:50:25.753398 | orchestrator | Monday 02 February 2026 05:50:09 +0000 (0:00:00.780) 0:16:37.345 *******
2026-02-02 05:50:25.753409 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:50:25.753419 | orchestrator |
2026-02-02 05:50:25.753430 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-02 05:50:25.753441 | orchestrator | Monday 02 February 2026 05:50:11 +0000 (0:00:01.682) 0:16:39.028 *******
2026-02-02 05:50:25.753451 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:50:25.753462 | orchestrator |
2026-02-02 05:50:25.753472 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-02 05:50:25.753483 | orchestrator | Monday 02 February 2026 05:50:13 +0000 (0:00:02.004) 0:16:41.032 *******
2026-02-02 05:50:25.753493 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2
2026-02-02 05:50:25.753505 | orchestrator |
2026-02-02 05:50:25.753516 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-02 05:50:25.753526 | orchestrator | Monday 02 February 2026 05:50:14 +0000 (0:00:01.088) 0:16:42.121 *******
2026-02-02 05:50:25.753537 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:50:25.753548 | orchestrator |
2026-02-02 05:50:25.753559 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-02 05:50:25.753569 | orchestrator | Monday 02 February 2026 05:50:15 +0000 (0:00:01.123) 0:16:43.244 *******
2026-02-02 05:50:25.753580 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:50:25.753590 | orchestrator |
2026-02-02 05:50:25.753601 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-02 05:50:25.753611 | orchestrator | Monday 02 February 2026 05:50:16 +0000 (0:00:01.102) 0:16:44.347 *******
2026-02-02 05:50:25.753622 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-02 05:50:25.753632 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-02 05:50:25.753643 | orchestrator |
2026-02-02 05:50:25.753653 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-02 05:50:25.753669 | orchestrator | Monday 02 February 2026 05:50:18 +0000 (0:00:01.850) 0:16:46.197 *******
2026-02-02 05:50:25.753679 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:50:25.753690 | orchestrator |
2026-02-02 05:50:25.753701 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-02 05:50:25.753711 | orchestrator | Monday 02 February 2026 05:50:20 +0000 (0:00:01.569) 0:16:47.767 *******
2026-02-02 05:50:25.753722 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:50:25.753733 | orchestrator |
2026-02-02 05:50:25.753743 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-02 05:50:25.753754 | orchestrator | Monday 02 February 2026 05:50:21 +0000 (0:00:01.142) 0:16:48.909 *******
2026-02-02 05:50:25.753765 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:50:25.753775 | orchestrator |
2026-02-02 05:50:25.753786 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-02 05:50:25.753797 | orchestrator | Monday 02 February 2026 05:50:22 +0000 (0:00:00.795) 0:16:49.705 *******
2026-02-02 05:50:25.753808 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:50:25.753825 | orchestrator |
2026-02-02 05:50:25.753836 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-02 05:50:25.753847 | orchestrator | Monday 02 February 2026 05:50:22 +0000 (0:00:00.757) 0:16:50.462 *******
2026-02-02 05:50:25.753857 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2
2026-02-02 05:50:25.753868 | orchestrator |
2026-02-02 05:50:25.753878 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-02 05:50:25.753889 | orchestrator | Monday 02 February 2026 05:50:23 +0000 (0:00:01.090) 0:16:51.552 *******
2026-02-02 05:50:25.753899 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:50:25.753910 | orchestrator |
2026-02-02 05:50:25.753921 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-02 05:50:25.753938 | orchestrator | Monday 02 February 2026 05:50:25 +0000 (0:00:01.770) 0:16:53.322 *******
2026-02-02 05:51:05.109686 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-02 05:51:05.109829 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-02 05:51:05.109860 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-02 05:51:05.109881 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:05.109902 | orchestrator |
2026-02-02 05:51:05.109914 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-02 05:51:05.109926 | orchestrator | Monday 02 February 2026 05:50:26 +0000 (0:00:01.138) 0:16:54.461 *******
2026-02-02 05:51:05.109937 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:05.109948 | orchestrator |
2026-02-02 05:51:05.109959 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-02 05:51:05.109970 | orchestrator | Monday 02 February 2026 05:50:27 +0000 (0:00:01.111) 0:16:55.572 *******
2026-02-02 05:51:05.109981 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:05.109992 | orchestrator |
2026-02-02 05:51:05.110003 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-02 05:51:05.110014 | orchestrator | Monday 02 February 2026 05:50:29 +0000 (0:00:01.163) 0:16:56.736 *******
2026-02-02 05:51:05.110166 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:05.110178 | orchestrator |
2026-02-02 05:51:05.110190 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-02 05:51:05.110204 | orchestrator | Monday 02 February 2026 05:50:30 +0000 (0:00:01.130) 0:16:57.866 *******
2026-02-02 05:51:05.110217 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:05.110230 | orchestrator |
2026-02-02 05:51:05.110243 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-02 05:51:05.110256 | orchestrator | Monday 02 February 2026 05:50:31 +0000 (0:00:01.177) 0:16:59.044 *******
2026-02-02 05:51:05.110269 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:05.110282 | orchestrator |
2026-02-02 05:51:05.110295 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-02 05:51:05.110308 | orchestrator | Monday 02 February 2026 05:50:32 +0000 (0:00:00.774) 0:16:59.819 *******
2026-02-02 05:51:05.110321 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:51:05.110336 | orchestrator |
2026-02-02 05:51:05.110349 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-02 05:51:05.110362 | orchestrator | Monday 02 February 2026 05:50:34 +0000 (0:00:02.069) 0:17:01.888 *******
2026-02-02 05:51:05.110375 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:51:05.110388 | orchestrator |
2026-02-02 05:51:05.110401 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-02 05:51:05.110415 | orchestrator | Monday 02 February 2026 05:50:35 +0000 (0:00:00.800) 0:17:02.689 *******
2026-02-02 05:51:05.110429 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2
2026-02-02 05:51:05.110443 | orchestrator |
2026-02-02 05:51:05.110456 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-02 05:51:05.110493 | orchestrator | Monday 02 February 2026 05:50:36 +0000 (0:00:01.208) 0:17:03.898 *******
2026-02-02 05:51:05.110505 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:05.110515 | orchestrator |
2026-02-02 05:51:05.110526 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-02 05:51:05.110537 | orchestrator | Monday 02 February 2026 05:50:37 +0000 (0:00:01.150) 0:17:05.048 *******
2026-02-02 05:51:05.110548 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:05.110558 | orchestrator |
2026-02-02 05:51:05.110569 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-02 05:51:05.110580 | orchestrator | Monday 02 February 2026 05:50:38 +0000 (0:00:01.133) 0:17:06.182 *******
2026-02-02 05:51:05.110590 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:05.110601 | orchestrator |
2026-02-02 05:51:05.110612 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-02 05:51:05.110623 | orchestrator | Monday 02 February 2026 05:50:39 +0000 (0:00:01.184) 0:17:07.367 *******
2026-02-02 05:51:05.110633 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:05.110644 | orchestrator |
2026-02-02 05:51:05.110669 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-02 05:51:05.110680 | orchestrator | Monday 02 February 2026 05:50:40 +0000 (0:00:01.182) 0:17:08.549 *******
2026-02-02 05:51:05.110690 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:05.110701 | orchestrator |
2026-02-02 05:51:05.110712 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-02 05:51:05.110722 | orchestrator | Monday 02 February 2026 05:50:42 +0000 (0:00:01.148) 0:17:09.698 *******
2026-02-02 05:51:05.110733 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:05.110744 | orchestrator |
2026-02-02 05:51:05.110754 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-02 05:51:05.110765 | orchestrator | Monday 02 February 2026 05:50:43 +0000 (0:00:01.162) 0:17:10.860 *******
2026-02-02 05:51:05.110776 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:05.110786 | orchestrator |
2026-02-02 05:51:05.110797 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-02 05:51:05.110809 | orchestrator | Monday 02 February 2026 05:50:44 +0000 (0:00:01.142) 0:17:12.003 *******
2026-02-02 05:51:05.110819 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:05.110830 | orchestrator |
2026-02-02 05:51:05.110841 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-02 05:51:05.110852 | orchestrator | Monday 02 February 2026 05:50:45 +0000 (0:00:01.162) 0:17:13.165 *******
2026-02-02 05:51:05.110862 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:51:05.110873 | orchestrator |
2026-02-02 05:51:05.110884 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-02 05:51:05.110894 | orchestrator | Monday 02 February 2026 05:50:46 +0000 (0:00:00.825) 0:17:13.991 *******
2026-02-02 05:51:05.110905 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2
2026-02-02 05:51:05.110917 | orchestrator |
2026-02-02 05:51:05.110928 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-02 05:51:05.110957 | orchestrator | Monday 02 February 2026 05:50:47 +0000 (0:00:01.132) 0:17:15.124 *******
2026-02-02 05:51:05.110969 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph)
2026-02-02 05:51:05.110980 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/)
2026-02-02 05:51:05.110990 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-02-02 05:51:05.111001 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-02-02 05:51:05.111012 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-02-02 05:51:05.111041 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-02-02 05:51:05.111052 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-02-02 05:51:05.111063 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-02-02 05:51:05.111073 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-02 05:51:05.111093 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-02 05:51:05.111104 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-02 05:51:05.111115 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-02 05:51:05.111126 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-02 05:51:05.111137 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-02 05:51:05.111147 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph)
2026-02-02 05:51:05.111158 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph)
2026-02-02 05:51:05.111169 | orchestrator |
2026-02-02 05:51:05.111180 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-02 05:51:05.111190 | orchestrator | Monday 02 February 2026 05:50:53 +0000 (0:00:06.441) 0:17:21.566 *******
2026-02-02 05:51:05.111201 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:05.111212 | orchestrator |
2026-02-02 05:51:05.111223 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-02 05:51:05.111234 | orchestrator | Monday 02 February 2026 05:50:54 +0000 (0:00:00.762) 0:17:22.329 *******
2026-02-02 05:51:05.111244 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:05.111255 | orchestrator |
2026-02-02 05:51:05.111266 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-02 05:51:05.111277 | orchestrator | Monday 02 February 2026 05:50:55 +0000 (0:00:00.836) 0:17:23.166 *******
2026-02-02 05:51:05.111288 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:05.111299 | orchestrator |
2026-02-02 05:51:05.111309 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-02 05:51:05.111320 | orchestrator | Monday 02 February 2026 05:50:56 +0000 (0:00:00.792) 0:17:23.958 *******
2026-02-02 05:51:05.111331 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:05.111341 | orchestrator |
2026-02-02 05:51:05.111352 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-02 05:51:05.111363 | orchestrator | Monday 02 February 2026 05:50:57 +0000 (0:00:00.831) 0:17:24.790 *******
2026-02-02 05:51:05.111374 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:05.111385 | orchestrator |
2026-02-02 05:51:05.111396 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-02 05:51:05.111406 | orchestrator | Monday 02 February 2026 05:50:58 +0000 (0:00:00.853) 0:17:25.643 *******
2026-02-02 05:51:05.111417 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:05.111428 | orchestrator |
2026-02-02 05:51:05.111439 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-02 05:51:05.111450 | orchestrator | Monday 02 February 2026 05:50:58 +0000 (0:00:00.759) 0:17:26.403 *******
2026-02-02 05:51:05.111461 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:05.111472 | orchestrator |
2026-02-02 05:51:05.111483 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-02 05:51:05.111494 | orchestrator | Monday 02 February 2026 05:50:59 +0000 (0:00:00.823) 0:17:27.226 *******
2026-02-02 05:51:05.111505 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:05.111515 | orchestrator |
2026-02-02 05:51:05.111531 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-02 05:51:05.111542 | orchestrator | Monday 02 February 2026 05:51:00 +0000 (0:00:00.744) 0:17:27.971 *******
2026-02-02 05:51:05.111553 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:05.111564 | orchestrator |
2026-02-02 05:51:05.111575 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-02 05:51:05.111585 | orchestrator | Monday 02 February 2026 05:51:01 +0000 (0:00:00.757) 0:17:28.728 *******
2026-02-02 05:51:05.111596 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:05.111606 | orchestrator |
2026-02-02 05:51:05.111617 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-02 05:51:05.111634 | orchestrator | Monday 02 February 2026 05:51:01 +0000 (0:00:00.765) 0:17:29.493 *******
2026-02-02 05:51:05.111645 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:05.111656 | orchestrator |
2026-02-02 05:51:05.111667 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-02 05:51:05.111678 | orchestrator | Monday 02 February 2026 05:51:02 +0000 (0:00:00.758) 0:17:30.252 *******
2026-02-02 05:51:05.111688 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:05.111699 | orchestrator |
2026-02-02 05:51:05.111710 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-02 05:51:05.111721 | orchestrator | Monday 02 February 2026 05:51:03 +0000 (0:00:00.823) 0:17:31.076 *******
2026-02-02 05:51:05.111731 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:05.111742 | orchestrator |
2026-02-02 05:51:05.111753 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-02 05:51:05.111764 | orchestrator | Monday 02 February 2026 05:51:04 +0000 (0:00:00.852) 0:17:31.928 *******
2026-02-02 05:51:05.111775 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:05.111786 | orchestrator |
2026-02-02 05:51:05.111797 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-02 05:51:05.111815 | orchestrator | Monday 02 February 2026 05:51:05 +0000 (0:00:00.751) 0:17:32.680 *******
2026-02-02 05:51:52.733934 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:52.734153 | orchestrator |
2026-02-02 05:51:52.734172 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-02 05:51:52.734185 | orchestrator | Monday 02 February 2026 05:51:05 +0000 (0:00:00.866) 0:17:33.546 *******
2026-02-02 05:51:52.734197 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:52.734208 | orchestrator |
2026-02-02 05:51:52.734218 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-02 05:51:52.734230 | orchestrator | Monday 02 February 2026 05:51:06 +0000 (0:00:00.776) 0:17:34.323 *******
2026-02-02 05:51:52.734241 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:52.734251 | orchestrator |
2026-02-02 05:51:52.734263 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-02 05:51:52.734275 | orchestrator | Monday 02 February 2026 05:51:07 +0000 (0:00:00.771) 0:17:35.150 *******
2026-02-02 05:51:52.734286 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:52.734297 | orchestrator |
2026-02-02 05:51:52.734321 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-02 05:51:52.734332 | orchestrator | Monday 02 February 2026 05:51:08 +0000 (0:00:00.771) 0:17:35.921 *******
2026-02-02 05:51:52.734343 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:52.734354 | orchestrator |
2026-02-02 05:51:52.734365 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-02 05:51:52.734375 | orchestrator | Monday 02 February 2026 05:51:09 +0000 (0:00:00.781) 0:17:36.703 *******
2026-02-02 05:51:52.734386 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:52.734397 | orchestrator |
2026-02-02 05:51:52.734408 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-02 05:51:52.734419 | orchestrator | Monday 02 February 2026 05:51:09 +0000 (0:00:00.868) 0:17:37.572 *******
2026-02-02 05:51:52.734430 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:52.734441 | orchestrator |
2026-02-02 05:51:52.734452 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-02 05:51:52.734463 | orchestrator | Monday 02 February 2026 05:51:10 +0000 (0:00:00.785) 0:17:38.357 *******
2026-02-02 05:51:52.734474 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-02 05:51:52.734487 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-02 05:51:52.734500 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-02 05:51:52.734513 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:52.734526 | orchestrator |
2026-02-02 05:51:52.734539 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-02 05:51:52.734575 | orchestrator | Monday 02 February 2026 05:51:11 +0000 (0:00:01.121) 0:17:39.479 *******
2026-02-02 05:51:52.734588 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-02 05:51:52.734600 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-02 05:51:52.734613 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-02 05:51:52.734625 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:52.734638 | orchestrator |
2026-02-02 05:51:52.734651 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-02 05:51:52.734664 | orchestrator | Monday 02 February 2026 05:51:12 +0000 (0:00:01.072) 0:17:40.552 *******
2026-02-02 05:51:52.734676 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-02 05:51:52.734689 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-02 05:51:52.734702 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-02 05:51:52.734715 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:52.734727 | orchestrator |
2026-02-02 05:51:52.734739 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-02 05:51:52.734752 | orchestrator | Monday 02 February 2026 05:51:14 +0000 (0:00:01.106) 0:17:41.658 *******
2026-02-02 05:51:52.734765 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:52.734777 | orchestrator |
2026-02-02 05:51:52.734804 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-02 05:51:52.734817 | orchestrator | Monday 02 February 2026 05:51:14 +0000 (0:00:00.791) 0:17:42.450 *******
2026-02-02 05:51:52.734830 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-02-02 05:51:52.734841 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:52.734852 | orchestrator |
2026-02-02 05:51:52.734863 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-02 05:51:52.734874 | orchestrator | Monday 02 February 2026 05:51:15 +0000 (0:00:00.905) 0:17:43.355 *******
2026-02-02 05:51:52.734884 | orchestrator | changed: [testbed-node-2]
2026-02-02 05:51:52.734895 | orchestrator |
2026-02-02 05:51:52.734906 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-02-02 05:51:52.734916 | orchestrator | Monday 02 February 2026 05:51:17 +0000 (0:00:01.395) 0:17:44.751 *******
2026-02-02 05:51:52.734927 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:51:52.734960 | orchestrator |
2026-02-02 05:51:52.734971 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-02-02 05:51:52.734982 | orchestrator | Monday 02 February 2026 05:51:18 +0000 (0:00:00.872) 0:17:45.624 *******
2026-02-02 05:51:52.734993 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-2
2026-02-02 05:51:52.735004 | orchestrator |
2026-02-02 05:51:52.735016 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-02-02 05:51:52.735026 | orchestrator | Monday 02 February 2026 05:51:19 +0000 (0:00:01.219) 0:17:46.843 *******
2026-02-02 05:51:52.735037 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:51:52.735048 | orchestrator |
2026-02-02 05:51:52.735058 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-02-02 05:51:52.735069 | orchestrator | Monday 02 February 2026 05:51:22 +0000 (0:00:03.194) 0:17:50.038 *******
2026-02-02 05:51:52.735080 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:51:52.735091 | orchestrator |
2026-02-02 05:51:52.735102 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-02-02 05:51:52.735132 | orchestrator | Monday 02 February 2026 05:51:23 +0000 (0:00:01.197) 0:17:51.235 *******
2026-02-02 05:51:52.735143 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:51:52.735154 | orchestrator |
2026-02-02 05:51:52.735166 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-02-02 05:51:52.735177 | orchestrator | Monday 02 February 2026 05:51:24 +0000 (0:00:01.190) 0:17:52.426 *******
2026-02-02 05:51:52.735188 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:51:52.735198 | orchestrator |
2026-02-02 05:51:52.735218 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-02-02 05:51:52.735229 | orchestrator | Monday 02 February 2026 05:51:26 +0000 (0:00:01.165) 0:17:53.591 *******
2026-02-02 05:51:52.735240 | orchestrator | changed: [testbed-node-2]
2026-02-02 05:51:52.735251 | orchestrator |
2026-02-02 05:51:52.735262 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-02-02 05:51:52.735272 | orchestrator | Monday 02 February 2026 05:51:28 +0000 (0:00:02.042) 0:17:55.634 *******
2026-02-02 05:51:52.735283 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:51:52.735294 | orchestrator |
2026-02-02 05:51:52.735305 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-02-02 05:51:52.735316 | orchestrator | Monday 02 February 2026 05:51:29 +0000 (0:00:01.572) 0:17:57.206 *******
2026-02-02 05:51:52.735326 | orchestrator | ok: [testbed-node-2]
2026-02-02 05:51:52.735337 | orchestrator |
2026-02-02 05:51:52.735358 |
orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-02 05:51:52.735370 | orchestrator | Monday 02 February 2026 05:51:31 +0000 (0:00:01.522) 0:17:58.729 ******* 2026-02-02 05:51:52.735381 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:51:52.735392 | orchestrator | 2026-02-02 05:51:52.735402 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-02 05:51:52.735413 | orchestrator | Monday 02 February 2026 05:51:32 +0000 (0:00:01.480) 0:18:00.209 ******* 2026-02-02 05:51:52.735424 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-02 05:51:52.735434 | orchestrator | 2026-02-02 05:51:52.735445 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-02 05:51:52.735456 | orchestrator | Monday 02 February 2026 05:51:34 +0000 (0:00:01.604) 0:18:01.813 ******* 2026-02-02 05:51:52.735467 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-02 05:51:52.735477 | orchestrator | 2026-02-02 05:51:52.735488 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-02 05:51:52.735499 | orchestrator | Monday 02 February 2026 05:51:35 +0000 (0:00:01.483) 0:18:03.297 ******* 2026-02-02 05:51:52.735509 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 05:51:52.735520 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-02 05:51:52.735531 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-02 05:51:52.735542 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-02-02 05:51:52.735552 | orchestrator | 2026-02-02 05:51:52.735563 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-02 05:51:52.735574 | orchestrator | Monday 02 February 2026 05:51:39 +0000 (0:00:04.234) 0:18:07.531 
******* 2026-02-02 05:51:52.735584 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:51:52.735595 | orchestrator | 2026-02-02 05:51:52.735606 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-02 05:51:52.735617 | orchestrator | Monday 02 February 2026 05:51:41 +0000 (0:00:02.023) 0:18:09.555 ******* 2026-02-02 05:51:52.735627 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:51:52.735641 | orchestrator | 2026-02-02 05:51:52.735659 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-02 05:51:52.735685 | orchestrator | Monday 02 February 2026 05:51:43 +0000 (0:00:01.125) 0:18:10.681 ******* 2026-02-02 05:51:52.735709 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:51:52.735726 | orchestrator | 2026-02-02 05:51:52.735745 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-02 05:51:52.735763 | orchestrator | Monday 02 February 2026 05:51:44 +0000 (0:00:01.136) 0:18:11.818 ******* 2026-02-02 05:51:52.735790 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:51:52.735808 | orchestrator | 2026-02-02 05:51:52.735826 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-02 05:51:52.735844 | orchestrator | Monday 02 February 2026 05:51:45 +0000 (0:00:01.702) 0:18:13.520 ******* 2026-02-02 05:51:52.735863 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:51:52.735894 | orchestrator | 2026-02-02 05:51:52.735905 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-02 05:51:52.735916 | orchestrator | Monday 02 February 2026 05:51:47 +0000 (0:00:01.472) 0:18:14.993 ******* 2026-02-02 05:51:52.735927 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:51:52.735968 | orchestrator | 2026-02-02 05:51:52.735981 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] 
************************************ 2026-02-02 05:51:52.735992 | orchestrator | Monday 02 February 2026 05:51:48 +0000 (0:00:00.786) 0:18:15.779 ******* 2026-02-02 05:51:52.736003 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-2 2026-02-02 05:51:52.736014 | orchestrator | 2026-02-02 05:51:52.736025 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-02 05:51:52.736035 | orchestrator | Monday 02 February 2026 05:51:49 +0000 (0:00:01.127) 0:18:16.907 ******* 2026-02-02 05:51:52.736046 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:51:52.736056 | orchestrator | 2026-02-02 05:51:52.736067 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-02 05:51:52.736078 | orchestrator | Monday 02 February 2026 05:51:50 +0000 (0:00:01.167) 0:18:18.074 ******* 2026-02-02 05:51:52.736089 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:51:52.736099 | orchestrator | 2026-02-02 05:51:52.736110 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-02 05:51:52.736121 | orchestrator | Monday 02 February 2026 05:51:51 +0000 (0:00:01.124) 0:18:19.199 ******* 2026-02-02 05:51:52.736131 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-2 2026-02-02 05:51:52.736142 | orchestrator | 2026-02-02 05:51:52.736153 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-02 05:51:52.736175 | orchestrator | Monday 02 February 2026 05:51:52 +0000 (0:00:01.104) 0:18:20.303 ******* 2026-02-02 05:53:00.617424 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:53:00.617505 | orchestrator | 2026-02-02 05:53:00.617514 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-02 05:53:00.617521 | orchestrator | Monday 02 February 2026 05:51:55 +0000 
(0:00:02.770) 0:18:23.073 ******* 2026-02-02 05:53:00.617527 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:53:00.617533 | orchestrator | 2026-02-02 05:53:00.617539 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-02 05:53:00.617545 | orchestrator | Monday 02 February 2026 05:51:57 +0000 (0:00:01.944) 0:18:25.018 ******* 2026-02-02 05:53:00.617551 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:53:00.617556 | orchestrator | 2026-02-02 05:53:00.617561 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-02 05:53:00.617566 | orchestrator | Monday 02 February 2026 05:51:59 +0000 (0:00:02.391) 0:18:27.410 ******* 2026-02-02 05:53:00.617571 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:53:00.617576 | orchestrator | 2026-02-02 05:53:00.617582 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-02 05:53:00.617587 | orchestrator | Monday 02 February 2026 05:52:02 +0000 (0:00:02.833) 0:18:30.243 ******* 2026-02-02 05:53:00.617592 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-2 2026-02-02 05:53:00.617598 | orchestrator | 2026-02-02 05:53:00.617603 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-02 05:53:00.617608 | orchestrator | Monday 02 February 2026 05:52:03 +0000 (0:00:01.130) 0:18:31.373 ******* 2026-02-02 05:53:00.617613 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-02-02 05:53:00.617619 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:53:00.617624 | orchestrator | 2026-02-02 05:53:00.617629 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-02 05:53:00.617634 | orchestrator | Monday 02 February 2026 05:52:26 +0000 (0:00:22.860) 0:18:54.234 ******* 2026-02-02 05:53:00.617639 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:53:00.617644 | orchestrator | 2026-02-02 05:53:00.617665 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-02 05:53:00.617670 | orchestrator | Monday 02 February 2026 05:52:29 +0000 (0:00:02.653) 0:18:56.888 ******* 2026-02-02 05:53:00.617675 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:53:00.617680 | orchestrator | 2026-02-02 05:53:00.617686 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-02 05:53:00.617691 | orchestrator | Monday 02 February 2026 05:52:30 +0000 (0:00:00.764) 0:18:57.652 ******* 2026-02-02 05:53:00.617698 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d80dfd80fe861993a62a686f3742b6b1f75a206'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-02 05:53:00.617704 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d80dfd80fe861993a62a686f3742b6b1f75a206'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-02 05:53:00.617719 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d80dfd80fe861993a62a686f3742b6b1f75a206'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-02 05:53:00.617725 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d80dfd80fe861993a62a686f3742b6b1f75a206'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-02 05:53:00.617732 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d80dfd80fe861993a62a686f3742b6b1f75a206'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-02 05:53:00.617738 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d80dfd80fe861993a62a686f3742b6b1f75a206'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__2d80dfd80fe861993a62a686f3742b6b1f75a206'}])  2026-02-02 05:53:00.617744 | orchestrator | 2026-02-02 05:53:00.617760 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-02 05:53:00.617766 | orchestrator | Monday 02 February 2026 05:52:39 +0000 (0:00:09.499) 0:19:07.151 ******* 2026-02-02 05:53:00.617771 | orchestrator | changed: [testbed-node-2] 2026-02-02 05:53:00.617776 | orchestrator | 
2026-02-02 05:53:00.617781 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-02 05:53:00.617786 | orchestrator | Monday 02 February 2026 05:52:41 +0000 (0:00:02.184) 0:19:09.335 ******* 2026-02-02 05:53:00.617791 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 05:53:00.617796 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-02-02 05:53:00.617802 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-02-02 05:53:00.617807 | orchestrator | 2026-02-02 05:53:00.617812 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-02 05:53:00.617822 | orchestrator | Monday 02 February 2026 05:52:43 +0000 (0:00:01.881) 0:19:11.217 ******* 2026-02-02 05:53:00.617827 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-02 05:53:00.617877 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-02 05:53:00.617884 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-02 05:53:00.617889 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:53:00.617894 | orchestrator | 2026-02-02 05:53:00.617900 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-02 05:53:00.617905 | orchestrator | Monday 02 February 2026 05:52:45 +0000 (0:00:01.519) 0:19:12.736 ******* 2026-02-02 05:53:00.617910 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:53:00.617915 | orchestrator | 2026-02-02 05:53:00.617920 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-02-02 05:53:00.617926 | orchestrator | Monday 02 February 2026 05:52:45 +0000 (0:00:00.770) 0:19:13.507 ******* 2026-02-02 05:53:00.617931 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:53:00.617936 | orchestrator | 2026-02-02 05:53:00.617941 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-02 05:53:00.617946 | orchestrator | Monday 02 February 2026 05:52:47 +0000 (0:00:01.938) 0:19:15.446 ******* 2026-02-02 05:53:00.617951 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:53:00.617956 | orchestrator | 2026-02-02 05:53:00.617961 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-02 05:53:00.617966 | orchestrator | Monday 02 February 2026 05:52:48 +0000 (0:00:00.799) 0:19:16.246 ******* 2026-02-02 05:53:00.617971 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:53:00.617976 | orchestrator | 2026-02-02 05:53:00.617981 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-02 05:53:00.617986 | orchestrator | Monday 02 February 2026 05:52:49 +0000 (0:00:00.763) 0:19:17.009 ******* 2026-02-02 05:53:00.617993 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:53:00.617999 | orchestrator | 2026-02-02 05:53:00.618005 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-02 05:53:00.618011 | orchestrator | Monday 02 February 2026 05:52:50 +0000 (0:00:00.754) 0:19:17.764 ******* 2026-02-02 05:53:00.618062 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:53:00.618068 | orchestrator | 2026-02-02 05:53:00.618075 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-02 05:53:00.618081 | orchestrator | Monday 02 February 2026 05:52:50 +0000 (0:00:00.774) 0:19:18.538 ******* 2026-02-02 05:53:00.618087 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:53:00.618093 | 
orchestrator | 2026-02-02 05:53:00.618099 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-02-02 05:53:00.618105 | orchestrator | Monday 02 February 2026 05:52:51 +0000 (0:00:00.806) 0:19:19.345 ******* 2026-02-02 05:53:00.618111 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:53:00.618117 | orchestrator | 2026-02-02 05:53:00.618128 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-02 05:53:00.618134 | orchestrator | Monday 02 February 2026 05:52:52 +0000 (0:00:00.803) 0:19:20.149 ******* 2026-02-02 05:53:00.618140 | orchestrator | skipping: [testbed-node-2] 2026-02-02 05:53:00.618146 | orchestrator | 2026-02-02 05:53:00.618152 | orchestrator | PLAY [Reset mon_host] ********************************************************** 2026-02-02 05:53:00.618158 | orchestrator | 2026-02-02 05:53:00.618164 | orchestrator | TASK [Reset mon_host fact] ***************************************************** 2026-02-02 05:53:00.618170 | orchestrator | Monday 02 February 2026 05:52:54 +0000 (0:00:01.778) 0:19:21.927 ******* 2026-02-02 05:53:00.618177 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:53:00.618183 | orchestrator | ok: [testbed-node-1] 2026-02-02 05:53:00.618189 | orchestrator | ok: [testbed-node-2] 2026-02-02 05:53:00.618196 | orchestrator | 2026-02-02 05:53:00.618204 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-02-02 05:53:00.618213 | orchestrator | 2026-02-02 05:53:00.618224 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-02 05:53:00.618247 | orchestrator | Monday 02 February 2026 05:52:55 +0000 (0:00:01.632) 0:19:23.560 ******* 2026-02-02 05:53:00.618255 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:00.618263 | orchestrator | 2026-02-02 05:53:00.618272 | orchestrator | TASK [ceph-facts : Include facts.yml] 
****************************************** 2026-02-02 05:53:00.618280 | orchestrator | Monday 02 February 2026 05:52:57 +0000 (0:00:01.121) 0:19:24.681 ******* 2026-02-02 05:53:00.618288 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:00.618297 | orchestrator | 2026-02-02 05:53:00.618306 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-02 05:53:00.618315 | orchestrator | Monday 02 February 2026 05:52:58 +0000 (0:00:01.238) 0:19:25.920 ******* 2026-02-02 05:53:00.618323 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:00.618329 | orchestrator | 2026-02-02 05:53:00.618336 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-02 05:53:00.618342 | orchestrator | Monday 02 February 2026 05:52:59 +0000 (0:00:01.155) 0:19:27.076 ******* 2026-02-02 05:53:00.618348 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:00.618353 | orchestrator | 2026-02-02 05:53:00.618364 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-02 05:53:46.953993 | orchestrator | Monday 02 February 2026 05:53:00 +0000 (0:00:01.111) 0:19:28.187 ******* 2026-02-02 05:53:46.954200 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.954228 | orchestrator | 2026-02-02 05:53:46.954242 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-02 05:53:46.954253 | orchestrator | Monday 02 February 2026 05:53:01 +0000 (0:00:01.118) 0:19:29.305 ******* 2026-02-02 05:53:46.954264 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.954275 | orchestrator | 2026-02-02 05:53:46.954286 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-02 05:53:46.954297 | orchestrator | Monday 02 February 2026 05:53:02 +0000 (0:00:01.097) 0:19:30.403 ******* 2026-02-02 05:53:46.954308 | orchestrator | skipping: 
[testbed-node-0] 2026-02-02 05:53:46.954318 | orchestrator | 2026-02-02 05:53:46.954329 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-02 05:53:46.954340 | orchestrator | Monday 02 February 2026 05:53:03 +0000 (0:00:01.125) 0:19:31.528 ******* 2026-02-02 05:53:46.954351 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.954362 | orchestrator | 2026-02-02 05:53:46.954372 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-02 05:53:46.954383 | orchestrator | Monday 02 February 2026 05:53:05 +0000 (0:00:01.453) 0:19:32.982 ******* 2026-02-02 05:53:46.954394 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.954404 | orchestrator | 2026-02-02 05:53:46.954415 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-02 05:53:46.954426 | orchestrator | Monday 02 February 2026 05:53:06 +0000 (0:00:01.125) 0:19:34.107 ******* 2026-02-02 05:53:46.954437 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.954448 | orchestrator | 2026-02-02 05:53:46.954459 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-02 05:53:46.954470 | orchestrator | Monday 02 February 2026 05:53:07 +0000 (0:00:01.160) 0:19:35.267 ******* 2026-02-02 05:53:46.954481 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.954491 | orchestrator | 2026-02-02 05:53:46.954502 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-02 05:53:46.954514 | orchestrator | Monday 02 February 2026 05:53:08 +0000 (0:00:01.138) 0:19:36.406 ******* 2026-02-02 05:53:46.954526 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.954539 | orchestrator | 2026-02-02 05:53:46.954551 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-02 05:53:46.954565 | 
orchestrator | Monday 02 February 2026 05:53:10 +0000 (0:00:01.219) 0:19:37.626 ******* 2026-02-02 05:53:46.954578 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.954590 | orchestrator | 2026-02-02 05:53:46.954603 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-02 05:53:46.954639 | orchestrator | Monday 02 February 2026 05:53:11 +0000 (0:00:01.131) 0:19:38.757 ******* 2026-02-02 05:53:46.954651 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.954663 | orchestrator | 2026-02-02 05:53:46.954676 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-02 05:53:46.954688 | orchestrator | Monday 02 February 2026 05:53:12 +0000 (0:00:01.160) 0:19:39.918 ******* 2026-02-02 05:53:46.954700 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.954712 | orchestrator | 2026-02-02 05:53:46.954725 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-02 05:53:46.954737 | orchestrator | Monday 02 February 2026 05:53:13 +0000 (0:00:01.151) 0:19:41.070 ******* 2026-02-02 05:53:46.954857 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.954885 | orchestrator | 2026-02-02 05:53:46.954909 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-02 05:53:46.954927 | orchestrator | Monday 02 February 2026 05:53:14 +0000 (0:00:01.147) 0:19:42.217 ******* 2026-02-02 05:53:46.954946 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.954963 | orchestrator | 2026-02-02 05:53:46.954999 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-02 05:53:46.955018 | orchestrator | Monday 02 February 2026 05:53:15 +0000 (0:00:01.185) 0:19:43.403 ******* 2026-02-02 05:53:46.955037 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.955054 | orchestrator | 2026-02-02 
05:53:46.955073 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-02 05:53:46.955093 | orchestrator | Monday 02 February 2026 05:53:16 +0000 (0:00:01.168) 0:19:44.572 ******* 2026-02-02 05:53:46.955112 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.955131 | orchestrator | 2026-02-02 05:53:46.955147 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-02 05:53:46.955159 | orchestrator | Monday 02 February 2026 05:53:18 +0000 (0:00:01.133) 0:19:45.706 ******* 2026-02-02 05:53:46.955170 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.955180 | orchestrator | 2026-02-02 05:53:46.955191 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-02 05:53:46.955202 | orchestrator | Monday 02 February 2026 05:53:19 +0000 (0:00:01.171) 0:19:46.877 ******* 2026-02-02 05:53:46.955213 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.955223 | orchestrator | 2026-02-02 05:53:46.955234 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-02 05:53:46.955245 | orchestrator | Monday 02 February 2026 05:53:20 +0000 (0:00:01.194) 0:19:48.072 ******* 2026-02-02 05:53:46.955256 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.955266 | orchestrator | 2026-02-02 05:53:46.955277 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-02 05:53:46.955288 | orchestrator | Monday 02 February 2026 05:53:21 +0000 (0:00:01.145) 0:19:49.217 ******* 2026-02-02 05:53:46.955298 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.955309 | orchestrator | 2026-02-02 05:53:46.955320 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-02 05:53:46.955330 | orchestrator | Monday 02 February 2026 05:53:22 +0000 
(0:00:01.165) 0:19:50.383 ******* 2026-02-02 05:53:46.955341 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.955352 | orchestrator | 2026-02-02 05:53:46.955363 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-02 05:53:46.955394 | orchestrator | Monday 02 February 2026 05:53:23 +0000 (0:00:01.139) 0:19:51.523 ******* 2026-02-02 05:53:46.955406 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.955416 | orchestrator | 2026-02-02 05:53:46.955427 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-02 05:53:46.955437 | orchestrator | Monday 02 February 2026 05:53:25 +0000 (0:00:01.151) 0:19:52.675 ******* 2026-02-02 05:53:46.955448 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.955459 | orchestrator | 2026-02-02 05:53:46.955482 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-02 05:53:46.955493 | orchestrator | Monday 02 February 2026 05:53:26 +0000 (0:00:01.167) 0:19:53.842 ******* 2026-02-02 05:53:46.955503 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.955513 | orchestrator | 2026-02-02 05:53:46.955524 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-02 05:53:46.955534 | orchestrator | Monday 02 February 2026 05:53:27 +0000 (0:00:01.124) 0:19:54.967 ******* 2026-02-02 05:53:46.955545 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.955556 | orchestrator | 2026-02-02 05:53:46.955566 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-02 05:53:46.955577 | orchestrator | Monday 02 February 2026 05:53:28 +0000 (0:00:01.154) 0:19:56.121 ******* 2026-02-02 05:53:46.955588 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.955598 | orchestrator | 2026-02-02 05:53:46.955609 | orchestrator | TASK 
[ceph-container-common : Get ceph version] ******************************** 2026-02-02 05:53:46.955620 | orchestrator | Monday 02 February 2026 05:53:29 +0000 (0:00:01.132) 0:19:57.254 ******* 2026-02-02 05:53:46.955630 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.955641 | orchestrator | 2026-02-02 05:53:46.955651 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-02 05:53:46.955662 | orchestrator | Monday 02 February 2026 05:53:30 +0000 (0:00:01.126) 0:19:58.380 ******* 2026-02-02 05:53:46.955672 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.955683 | orchestrator | 2026-02-02 05:53:46.955694 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-02 05:53:46.955704 | orchestrator | Monday 02 February 2026 05:53:31 +0000 (0:00:01.147) 0:19:59.527 ******* 2026-02-02 05:53:46.955715 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.955726 | orchestrator | 2026-02-02 05:53:46.955736 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-02 05:53:46.955747 | orchestrator | Monday 02 February 2026 05:53:33 +0000 (0:00:01.272) 0:20:00.800 ******* 2026-02-02 05:53:46.955758 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.955768 | orchestrator | 2026-02-02 05:53:46.955800 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-02 05:53:46.955811 | orchestrator | Monday 02 February 2026 05:53:34 +0000 (0:00:01.113) 0:20:01.914 ******* 2026-02-02 05:53:46.955822 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.955832 | orchestrator | 2026-02-02 05:53:46.955843 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-02 05:53:46.955854 | orchestrator | Monday 02 February 2026 05:53:35 +0000 (0:00:01.214) 0:20:03.128 ******* 2026-02-02 
05:53:46.955864 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.955875 | orchestrator | 2026-02-02 05:53:46.955885 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-02 05:53:46.955896 | orchestrator | Monday 02 February 2026 05:53:36 +0000 (0:00:01.133) 0:20:04.262 ******* 2026-02-02 05:53:46.955907 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.955917 | orchestrator | 2026-02-02 05:53:46.955928 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-02 05:53:46.955939 | orchestrator | Monday 02 February 2026 05:53:37 +0000 (0:00:01.156) 0:20:05.418 ******* 2026-02-02 05:53:46.955950 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.955960 | orchestrator | 2026-02-02 05:53:46.955971 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-02 05:53:46.955988 | orchestrator | Monday 02 February 2026 05:53:38 +0000 (0:00:01.134) 0:20:06.553 ******* 2026-02-02 05:53:46.955999 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.956010 | orchestrator | 2026-02-02 05:53:46.956021 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-02 05:53:46.956032 | orchestrator | Monday 02 February 2026 05:53:40 +0000 (0:00:01.106) 0:20:07.659 ******* 2026-02-02 05:53:46.956052 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.956075 | orchestrator | 2026-02-02 05:53:46.956086 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-02 05:53:46.956098 | orchestrator | Monday 02 February 2026 05:53:41 +0000 (0:00:01.134) 0:20:08.793 ******* 2026-02-02 05:53:46.956108 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.956119 | orchestrator | 2026-02-02 05:53:46.956130 | orchestrator | TASK [ceph-config : Set_fact num_osds from the 
output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-02 05:53:46.956141 | orchestrator | Monday 02 February 2026 05:53:42 +0000 (0:00:01.161) 0:20:09.954 ******* 2026-02-02 05:53:46.956151 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.956162 | orchestrator | 2026-02-02 05:53:46.956173 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-02 05:53:46.956184 | orchestrator | Monday 02 February 2026 05:53:43 +0000 (0:00:01.164) 0:20:11.119 ******* 2026-02-02 05:53:46.956194 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.956205 | orchestrator | 2026-02-02 05:53:46.956216 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-02 05:53:46.956226 | orchestrator | Monday 02 February 2026 05:53:44 +0000 (0:00:01.114) 0:20:12.234 ******* 2026-02-02 05:53:46.956237 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.956247 | orchestrator | 2026-02-02 05:53:46.956258 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-02 05:53:46.956269 | orchestrator | Monday 02 February 2026 05:53:45 +0000 (0:00:01.136) 0:20:13.370 ******* 2026-02-02 05:53:46.956279 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:53:46.956290 | orchestrator | 2026-02-02 05:53:46.956301 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-02 05:53:46.956318 | orchestrator | Monday 02 February 2026 05:53:46 +0000 (0:00:01.148) 0:20:14.519 ******* 2026-02-02 05:54:25.436041 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:54:25.436162 | orchestrator | 2026-02-02 05:54:25.436181 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-02 05:54:25.436193 | orchestrator | Monday 02 February 2026 05:53:48 +0000 (0:00:01.186) 
0:20:15.706 ******* 2026-02-02 05:54:25.436203 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:54:25.436212 | orchestrator | 2026-02-02 05:54:25.436223 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-02 05:54:25.436234 | orchestrator | Monday 02 February 2026 05:53:49 +0000 (0:00:01.216) 0:20:16.922 ******* 2026-02-02 05:54:25.436245 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:54:25.436256 | orchestrator | 2026-02-02 05:54:25.436266 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-02 05:54:25.436277 | orchestrator | Monday 02 February 2026 05:53:50 +0000 (0:00:01.121) 0:20:18.044 ******* 2026-02-02 05:54:25.436287 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:54:25.436298 | orchestrator | 2026-02-02 05:54:25.436309 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-02 05:54:25.436320 | orchestrator | Monday 02 February 2026 05:53:51 +0000 (0:00:01.279) 0:20:19.323 ******* 2026-02-02 05:54:25.436333 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:54:25.436344 | orchestrator | 2026-02-02 05:54:25.436356 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-02 05:54:25.436368 | orchestrator | Monday 02 February 2026 05:53:52 +0000 (0:00:01.196) 0:20:20.520 ******* 2026-02-02 05:54:25.436380 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:54:25.436391 | orchestrator | 2026-02-02 05:54:25.436403 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-02 05:54:25.436415 | orchestrator | Monday 02 February 2026 05:53:54 +0000 (0:00:01.133) 0:20:21.653 ******* 2026-02-02 05:54:25.436422 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:54:25.436429 | orchestrator | 2026-02-02 
05:54:25.436436 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-02 05:54:25.436465 | orchestrator | Monday 02 February 2026 05:53:55 +0000 (0:00:01.130) 0:20:22.784 ******* 2026-02-02 05:54:25.436472 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:54:25.436479 | orchestrator | 2026-02-02 05:54:25.436486 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-02 05:54:25.436493 | orchestrator | Monday 02 February 2026 05:53:56 +0000 (0:00:01.146) 0:20:23.931 ******* 2026-02-02 05:54:25.436499 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:54:25.436506 | orchestrator | 2026-02-02 05:54:25.436513 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-02 05:54:25.436519 | orchestrator | Monday 02 February 2026 05:53:57 +0000 (0:00:01.135) 0:20:25.067 ******* 2026-02-02 05:54:25.436526 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:54:25.436532 | orchestrator | 2026-02-02 05:54:25.436542 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-02 05:54:25.436553 | orchestrator | Monday 02 February 2026 05:53:58 +0000 (0:00:01.194) 0:20:26.262 ******* 2026-02-02 05:54:25.436564 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-02 05:54:25.436575 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-02 05:54:25.436586 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-02 05:54:25.436597 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:54:25.436607 | orchestrator | 2026-02-02 05:54:25.436618 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-02 05:54:25.436630 | orchestrator | Monday 02 February 2026 05:54:00 +0000 (0:00:01.429) 0:20:27.691 ******* 2026-02-02 05:54:25.436642 | orchestrator | 
skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-02 05:54:25.436669 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-02 05:54:25.436680 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-02 05:54:25.436692 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:54:25.436703 | orchestrator | 2026-02-02 05:54:25.436714 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-02 05:54:25.436752 | orchestrator | Monday 02 February 2026 05:54:01 +0000 (0:00:01.744) 0:20:29.436 ******* 2026-02-02 05:54:25.436765 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-02 05:54:25.436777 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-02 05:54:25.436789 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-02 05:54:25.436799 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:54:25.436811 | orchestrator | 2026-02-02 05:54:25.436822 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-02 05:54:25.436833 | orchestrator | Monday 02 February 2026 05:54:03 +0000 (0:00:01.767) 0:20:31.204 ******* 2026-02-02 05:54:25.436845 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:54:25.436857 | orchestrator | 2026-02-02 05:54:25.436869 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-02 05:54:25.436880 | orchestrator | Monday 02 February 2026 05:54:04 +0000 (0:00:01.199) 0:20:32.404 ******* 2026-02-02 05:54:25.436892 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-02-02 05:54:25.436904 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:54:25.436914 | orchestrator | 2026-02-02 05:54:25.436925 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-02 05:54:25.436936 | orchestrator | Monday 02 February 
2026 05:54:06 +0000 (0:00:01.236) 0:20:33.640 ******* 2026-02-02 05:54:25.436947 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:54:25.436957 | orchestrator | 2026-02-02 05:54:25.436968 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-02 05:54:25.436980 | orchestrator | Monday 02 February 2026 05:54:07 +0000 (0:00:01.136) 0:20:34.777 ******* 2026-02-02 05:54:25.436991 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-02 05:54:25.437003 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-02 05:54:25.437014 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-02 05:54:25.437055 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:54:25.437066 | orchestrator | 2026-02-02 05:54:25.437077 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-02 05:54:25.437088 | orchestrator | Monday 02 February 2026 05:54:08 +0000 (0:00:01.406) 0:20:36.183 ******* 2026-02-02 05:54:25.437099 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:54:25.437110 | orchestrator | 2026-02-02 05:54:25.437123 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-02 05:54:25.437134 | orchestrator | Monday 02 February 2026 05:54:09 +0000 (0:00:01.129) 0:20:37.313 ******* 2026-02-02 05:54:25.437145 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:54:25.437156 | orchestrator | 2026-02-02 05:54:25.437167 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-02 05:54:25.437179 | orchestrator | Monday 02 February 2026 05:54:10 +0000 (0:00:01.130) 0:20:38.444 ******* 2026-02-02 05:54:25.437191 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:54:25.437201 | orchestrator | 2026-02-02 05:54:25.437211 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] 
************************************** 2026-02-02 05:54:25.437221 | orchestrator | Monday 02 February 2026 05:54:11 +0000 (0:00:01.130) 0:20:39.575 ******* 2026-02-02 05:54:25.437232 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:54:25.437242 | orchestrator | 2026-02-02 05:54:25.437254 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-02-02 05:54:25.437265 | orchestrator | 2026-02-02 05:54:25.437277 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-02 05:54:25.437288 | orchestrator | Monday 02 February 2026 05:54:13 +0000 (0:00:01.033) 0:20:40.608 ******* 2026-02-02 05:54:25.437299 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:25.437310 | orchestrator | 2026-02-02 05:54:25.437321 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-02 05:54:25.437332 | orchestrator | Monday 02 February 2026 05:54:13 +0000 (0:00:00.780) 0:20:41.389 ******* 2026-02-02 05:54:25.437344 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:25.437355 | orchestrator | 2026-02-02 05:54:25.437366 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-02 05:54:25.437378 | orchestrator | Monday 02 February 2026 05:54:14 +0000 (0:00:00.943) 0:20:42.333 ******* 2026-02-02 05:54:25.437389 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:25.437400 | orchestrator | 2026-02-02 05:54:25.437411 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-02 05:54:25.437422 | orchestrator | Monday 02 February 2026 05:54:15 +0000 (0:00:00.846) 0:20:43.179 ******* 2026-02-02 05:54:25.437433 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:25.437443 | orchestrator | 2026-02-02 05:54:25.437454 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 
2026-02-02 05:54:25.437465 | orchestrator | Monday 02 February 2026 05:54:16 +0000 (0:00:00.784) 0:20:43.963 ******* 2026-02-02 05:54:25.437474 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:25.437484 | orchestrator | 2026-02-02 05:54:25.437493 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-02 05:54:25.437505 | orchestrator | Monday 02 February 2026 05:54:17 +0000 (0:00:00.756) 0:20:44.720 ******* 2026-02-02 05:54:25.437516 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:25.437527 | orchestrator | 2026-02-02 05:54:25.437537 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-02 05:54:25.437548 | orchestrator | Monday 02 February 2026 05:54:17 +0000 (0:00:00.812) 0:20:45.533 ******* 2026-02-02 05:54:25.437559 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:25.437570 | orchestrator | 2026-02-02 05:54:25.437582 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-02 05:54:25.437593 | orchestrator | Monday 02 February 2026 05:54:18 +0000 (0:00:00.773) 0:20:46.306 ******* 2026-02-02 05:54:25.437604 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:25.437623 | orchestrator | 2026-02-02 05:54:25.437641 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-02 05:54:25.437653 | orchestrator | Monday 02 February 2026 05:54:19 +0000 (0:00:00.772) 0:20:47.079 ******* 2026-02-02 05:54:25.437664 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:25.437676 | orchestrator | 2026-02-02 05:54:25.437687 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-02 05:54:25.437698 | orchestrator | Monday 02 February 2026 05:54:20 +0000 (0:00:00.819) 0:20:47.899 ******* 2026-02-02 05:54:25.437709 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:25.437720 
| orchestrator | 2026-02-02 05:54:25.437753 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-02 05:54:25.437764 | orchestrator | Monday 02 February 2026 05:54:21 +0000 (0:00:00.822) 0:20:48.721 ******* 2026-02-02 05:54:25.437775 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:25.437787 | orchestrator | 2026-02-02 05:54:25.437798 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-02 05:54:25.437809 | orchestrator | Monday 02 February 2026 05:54:21 +0000 (0:00:00.790) 0:20:49.511 ******* 2026-02-02 05:54:25.437820 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:25.437831 | orchestrator | 2026-02-02 05:54:25.437842 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-02 05:54:25.437854 | orchestrator | Monday 02 February 2026 05:54:22 +0000 (0:00:00.834) 0:20:50.346 ******* 2026-02-02 05:54:25.437865 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:25.437876 | orchestrator | 2026-02-02 05:54:25.437888 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-02 05:54:25.437899 | orchestrator | Monday 02 February 2026 05:54:23 +0000 (0:00:00.802) 0:20:51.149 ******* 2026-02-02 05:54:25.437910 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:25.437921 | orchestrator | 2026-02-02 05:54:25.437933 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-02 05:54:25.437944 | orchestrator | Monday 02 February 2026 05:54:24 +0000 (0:00:01.069) 0:20:52.218 ******* 2026-02-02 05:54:25.437955 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:25.437966 | orchestrator | 2026-02-02 05:54:25.437977 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-02 05:54:25.437996 | orchestrator | Monday 02 February 2026 
05:54:25 +0000 (0:00:00.788) 0:20:53.007 ******* 2026-02-02 05:54:58.563218 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.563331 | orchestrator | 2026-02-02 05:54:58.563348 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-02 05:54:58.563361 | orchestrator | Monday 02 February 2026 05:54:26 +0000 (0:00:00.788) 0:20:53.795 ******* 2026-02-02 05:54:58.563373 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.563383 | orchestrator | 2026-02-02 05:54:58.563395 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-02 05:54:58.563406 | orchestrator | Monday 02 February 2026 05:54:26 +0000 (0:00:00.783) 0:20:54.579 ******* 2026-02-02 05:54:58.563417 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.563428 | orchestrator | 2026-02-02 05:54:58.563438 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-02 05:54:58.563449 | orchestrator | Monday 02 February 2026 05:54:27 +0000 (0:00:00.870) 0:20:55.449 ******* 2026-02-02 05:54:58.563460 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.563470 | orchestrator | 2026-02-02 05:54:58.563481 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-02 05:54:58.563493 | orchestrator | Monday 02 February 2026 05:54:28 +0000 (0:00:00.768) 0:20:56.218 ******* 2026-02-02 05:54:58.563503 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.563514 | orchestrator | 2026-02-02 05:54:58.563525 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-02 05:54:58.563536 | orchestrator | Monday 02 February 2026 05:54:29 +0000 (0:00:00.774) 0:20:56.993 ******* 2026-02-02 05:54:58.563546 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.563584 | orchestrator | 2026-02-02 05:54:58.563595 | 
orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-02 05:54:58.563606 | orchestrator | Monday 02 February 2026 05:54:30 +0000 (0:00:00.778) 0:20:57.772 ******* 2026-02-02 05:54:58.563617 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.563627 | orchestrator | 2026-02-02 05:54:58.563638 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-02 05:54:58.563648 | orchestrator | Monday 02 February 2026 05:54:31 +0000 (0:00:00.814) 0:20:58.586 ******* 2026-02-02 05:54:58.563659 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.563670 | orchestrator | 2026-02-02 05:54:58.563680 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-02 05:54:58.563720 | orchestrator | Monday 02 February 2026 05:54:31 +0000 (0:00:00.808) 0:20:59.395 ******* 2026-02-02 05:54:58.563738 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.563751 | orchestrator | 2026-02-02 05:54:58.563764 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-02 05:54:58.563777 | orchestrator | Monday 02 February 2026 05:54:32 +0000 (0:00:00.803) 0:21:00.198 ******* 2026-02-02 05:54:58.563789 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.563801 | orchestrator | 2026-02-02 05:54:58.563813 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-02 05:54:58.563826 | orchestrator | Monday 02 February 2026 05:54:33 +0000 (0:00:00.786) 0:21:00.985 ******* 2026-02-02 05:54:58.563839 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.563851 | orchestrator | 2026-02-02 05:54:58.563865 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-02 05:54:58.563877 | orchestrator | Monday 02 February 2026 05:54:34 +0000 (0:00:00.822) 0:21:01.807 ******* 
2026-02-02 05:54:58.563889 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.563901 | orchestrator | 2026-02-02 05:54:58.563914 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-02 05:54:58.563927 | orchestrator | Monday 02 February 2026 05:54:34 +0000 (0:00:00.766) 0:21:02.574 ******* 2026-02-02 05:54:58.563939 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.563952 | orchestrator | 2026-02-02 05:54:58.563965 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-02 05:54:58.563992 | orchestrator | Monday 02 February 2026 05:54:35 +0000 (0:00:00.778) 0:21:03.353 ******* 2026-02-02 05:54:58.564005 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.564017 | orchestrator | 2026-02-02 05:54:58.564030 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-02 05:54:58.564042 | orchestrator | Monday 02 February 2026 05:54:36 +0000 (0:00:00.807) 0:21:04.160 ******* 2026-02-02 05:54:58.564054 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.564066 | orchestrator | 2026-02-02 05:54:58.564078 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-02 05:54:58.564091 | orchestrator | Monday 02 February 2026 05:54:37 +0000 (0:00:00.833) 0:21:04.994 ******* 2026-02-02 05:54:58.564103 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.564114 | orchestrator | 2026-02-02 05:54:58.564124 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-02 05:54:58.564135 | orchestrator | Monday 02 February 2026 05:54:38 +0000 (0:00:00.778) 0:21:05.773 ******* 2026-02-02 05:54:58.564146 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.564156 | orchestrator | 2026-02-02 05:54:58.564167 | orchestrator | TASK [ceph-config : Include 
create_ceph_initial_dirs.yml] ********************** 2026-02-02 05:54:58.564178 | orchestrator | Monday 02 February 2026 05:54:38 +0000 (0:00:00.757) 0:21:06.530 ******* 2026-02-02 05:54:58.564188 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.564199 | orchestrator | 2026-02-02 05:54:58.564210 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-02 05:54:58.564221 | orchestrator | Monday 02 February 2026 05:54:39 +0000 (0:00:00.779) 0:21:07.309 ******* 2026-02-02 05:54:58.564240 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.564251 | orchestrator | 2026-02-02 05:54:58.564262 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-02 05:54:58.564273 | orchestrator | Monday 02 February 2026 05:54:40 +0000 (0:00:00.777) 0:21:08.087 ******* 2026-02-02 05:54:58.564283 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.564294 | orchestrator | 2026-02-02 05:54:58.564305 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-02 05:54:58.564315 | orchestrator | Monday 02 February 2026 05:54:41 +0000 (0:00:00.792) 0:21:08.879 ******* 2026-02-02 05:54:58.564326 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.564337 | orchestrator | 2026-02-02 05:54:58.564365 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-02 05:54:58.564376 | orchestrator | Monday 02 February 2026 05:54:42 +0000 (0:00:00.774) 0:21:09.654 ******* 2026-02-02 05:54:58.564391 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.564410 | orchestrator | 2026-02-02 05:54:58.564428 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-02 05:54:58.564445 | orchestrator | Monday 02 February 2026 05:54:42 +0000 (0:00:00.781) 0:21:10.435 ******* 2026-02-02 05:54:58.564462 | orchestrator | 
skipping: [testbed-node-1] 2026-02-02 05:54:58.564481 | orchestrator | 2026-02-02 05:54:58.564498 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-02 05:54:58.564516 | orchestrator | Monday 02 February 2026 05:54:43 +0000 (0:00:00.965) 0:21:11.401 ******* 2026-02-02 05:54:58.564534 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.564553 | orchestrator | 2026-02-02 05:54:58.564573 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-02 05:54:58.564591 | orchestrator | Monday 02 February 2026 05:54:44 +0000 (0:00:00.817) 0:21:12.219 ******* 2026-02-02 05:54:58.564611 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.564625 | orchestrator | 2026-02-02 05:54:58.564636 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-02 05:54:58.564646 | orchestrator | Monday 02 February 2026 05:54:45 +0000 (0:00:00.759) 0:21:12.978 ******* 2026-02-02 05:54:58.564657 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.564668 | orchestrator | 2026-02-02 05:54:58.564679 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-02 05:54:58.564711 | orchestrator | Monday 02 February 2026 05:54:46 +0000 (0:00:00.881) 0:21:13.859 ******* 2026-02-02 05:54:58.564723 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.564733 | orchestrator | 2026-02-02 05:54:58.564744 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-02 05:54:58.564754 | orchestrator | Monday 02 February 2026 05:54:47 +0000 (0:00:00.844) 0:21:14.704 ******* 2026-02-02 05:54:58.564765 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.564776 | orchestrator | 2026-02-02 05:54:58.564786 | orchestrator | TASK 
[ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-02 05:54:58.564797 | orchestrator | Monday 02 February 2026 05:54:47 +0000 (0:00:00.820) 0:21:15.525 ******* 2026-02-02 05:54:58.564808 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.564819 | orchestrator | 2026-02-02 05:54:58.564829 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-02 05:54:58.564840 | orchestrator | Monday 02 February 2026 05:54:48 +0000 (0:00:00.869) 0:21:16.394 ******* 2026-02-02 05:54:58.564850 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.564861 | orchestrator | 2026-02-02 05:54:58.564872 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-02 05:54:58.564883 | orchestrator | Monday 02 February 2026 05:54:49 +0000 (0:00:00.796) 0:21:17.191 ******* 2026-02-02 05:54:58.564893 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.564904 | orchestrator | 2026-02-02 05:54:58.564915 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-02 05:54:58.564935 | orchestrator | Monday 02 February 2026 05:54:50 +0000 (0:00:01.034) 0:21:18.225 ******* 2026-02-02 05:54:58.564946 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.564957 | orchestrator | 2026-02-02 05:54:58.564968 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-02 05:54:58.564979 | orchestrator | Monday 02 February 2026 05:54:51 +0000 (0:00:00.818) 0:21:19.043 ******* 2026-02-02 05:54:58.564989 | orchestrator | skipping: [testbed-node-1] 2026-02-02 05:54:58.565000 | orchestrator | 2026-02-02 05:54:58.565017 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-02 05:54:58.565029 | orchestrator | Monday 02 February 2026 05:54:52 +0000 (0:00:00.942) 0:21:19.985 ******* 2026-02-02 
05:54:58.565039 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:54:58.565050 | orchestrator |
2026-02-02 05:54:58.565061 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-02 05:54:58.565072 | orchestrator | Monday 02 February 2026 05:54:53 +0000 (0:00:00.834) 0:21:20.820 *******
2026-02-02 05:54:58.565082 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:54:58.565093 | orchestrator |
2026-02-02 05:54:58.565104 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-02 05:54:58.565116 | orchestrator | Monday 02 February 2026 05:54:54 +0000 (0:00:00.801) 0:21:21.622 *******
2026-02-02 05:54:58.565127 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:54:58.565138 | orchestrator |
2026-02-02 05:54:58.565149 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-02 05:54:58.565160 | orchestrator | Monday 02 February 2026 05:54:54 +0000 (0:00:00.785) 0:21:22.408 *******
2026-02-02 05:54:58.565170 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:54:58.565181 | orchestrator |
2026-02-02 05:54:58.565192 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-02 05:54:58.565203 | orchestrator | Monday 02 February 2026 05:54:55 +0000 (0:00:00.910) 0:21:23.318 *******
2026-02-02 05:54:58.565225 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:54:58.565236 | orchestrator |
2026-02-02 05:54:58.565247 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-02 05:54:58.565257 | orchestrator | Monday 02 February 2026 05:54:56 +0000 (0:00:00.792) 0:21:24.110 *******
2026-02-02 05:54:58.565268 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:54:58.565279 | orchestrator |
2026-02-02 05:54:58.565289 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-02 05:54:58.565300 | orchestrator | Monday 02 February 2026 05:54:57 +0000 (0:00:00.805) 0:21:24.916 *******
2026-02-02 05:54:58.565311 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-02 05:54:58.565322 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-02 05:54:58.565342 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-02 05:55:31.412965 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:55:31.413093 | orchestrator |
2026-02-02 05:55:31.413117 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-02 05:55:31.413134 | orchestrator | Monday 02 February 2026 05:54:58 +0000 (0:00:01.217) 0:21:26.134 *******
2026-02-02 05:55:31.413149 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-02 05:55:31.413166 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-02 05:55:31.413181 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-02 05:55:31.413197 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:55:31.413211 | orchestrator |
2026-02-02 05:55:31.413227 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-02 05:55:31.413243 | orchestrator | Monday 02 February 2026 05:54:59 +0000 (0:00:01.125) 0:21:27.259 *******
2026-02-02 05:55:31.413258 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-02 05:55:31.413273 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-02 05:55:31.413288 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-02 05:55:31.413333 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:55:31.413349 | orchestrator |
2026-02-02 05:55:31.413363 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-02 05:55:31.413378 | orchestrator | Monday 02 February 2026 05:55:00 +0000 (0:00:01.140) 0:21:28.399 *******
2026-02-02 05:55:31.413393 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:55:31.413407 | orchestrator |
2026-02-02 05:55:31.413422 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-02 05:55:31.413438 | orchestrator | Monday 02 February 2026 05:55:01 +0000 (0:00:00.780) 0:21:29.180 *******
2026-02-02 05:55:31.413454 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-02 05:55:31.413469 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:55:31.413484 | orchestrator |
2026-02-02 05:55:31.413499 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-02 05:55:31.413514 | orchestrator | Monday 02 February 2026 05:55:02 +0000 (0:00:00.940) 0:21:30.121 *******
2026-02-02 05:55:31.413529 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:55:31.413544 | orchestrator |
2026-02-02 05:55:31.413557 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-02 05:55:31.413571 | orchestrator | Monday 02 February 2026 05:55:03 +0000 (0:00:00.839) 0:21:30.961 *******
2026-02-02 05:55:31.413585 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-02 05:55:31.413600 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-02 05:55:31.413615 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-02 05:55:31.413629 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:55:31.413729 | orchestrator |
2026-02-02 05:55:31.413746 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-02 05:55:31.413761 | orchestrator | Monday 02 February 2026 05:55:05 +0000 (0:00:01.659) 0:21:32.621 *******
2026-02-02 05:55:31.413776 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:55:31.413791 | orchestrator |
2026-02-02 05:55:31.413808 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-02 05:55:31.413823 | orchestrator | Monday 02 February 2026 05:55:05 +0000 (0:00:00.892) 0:21:33.514 *******
2026-02-02 05:55:31.413838 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:55:31.413847 | orchestrator |
2026-02-02 05:55:31.413855 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-02 05:55:31.413864 | orchestrator | Monday 02 February 2026 05:55:06 +0000 (0:00:00.913) 0:21:34.427 *******
2026-02-02 05:55:31.413873 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:55:31.413881 | orchestrator |
2026-02-02 05:55:31.413904 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-02 05:55:31.413913 | orchestrator | Monday 02 February 2026 05:55:07 +0000 (0:00:00.935) 0:21:35.362 *******
2026-02-02 05:55:31.413922 | orchestrator | skipping: [testbed-node-1]
2026-02-02 05:55:31.413931 | orchestrator |
2026-02-02 05:55:31.413940 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-02-02 05:55:31.413948 | orchestrator |
2026-02-02 05:55:31.413956 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-02-02 05:55:31.413965 | orchestrator | Monday 02 February 2026 05:55:08 +0000 (0:00:01.057) 0:21:36.420 *******
2026-02-02 05:55:31.413974 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:55:31.413982 | orchestrator |
2026-02-02 05:55:31.413991 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-02 05:55:31.414000 | orchestrator | Monday 02 February 2026 05:55:09 +0000 (0:00:00.829) 0:21:37.249 *******
2026-02-02 05:55:31.414008 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:55:31.414069 | orchestrator |
2026-02-02 05:55:31.414079 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-02 05:55:31.414087 | orchestrator | Monday 02 February 2026 05:55:10 +0000 (0:00:00.823) 0:21:38.073 *******
2026-02-02 05:55:31.414096 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:55:31.414114 | orchestrator |
2026-02-02 05:55:31.414123 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-02 05:55:31.414131 | orchestrator | Monday 02 February 2026 05:55:11 +0000 (0:00:00.853) 0:21:38.926 *******
2026-02-02 05:55:31.414140 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:55:31.414149 | orchestrator |
2026-02-02 05:55:31.414157 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-02 05:55:31.414166 | orchestrator | Monday 02 February 2026 05:55:12 +0000 (0:00:00.882) 0:21:39.809 *******
2026-02-02 05:55:31.414174 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:55:31.414183 | orchestrator |
2026-02-02 05:55:31.414192 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-02 05:55:31.414200 | orchestrator | Monday 02 February 2026 05:55:13 +0000 (0:00:00.792) 0:21:40.601 *******
2026-02-02 05:55:31.414208 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:55:31.414217 | orchestrator |
2026-02-02 05:55:31.414226 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-02 05:55:31.414250 | orchestrator | Monday 02 February 2026 05:55:13 +0000 (0:00:00.775) 0:21:41.377 *******
2026-02-02 05:55:31.414260 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:55:31.414269 | orchestrator |
2026-02-02 05:55:31.414278 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-02 05:55:31.414286 | orchestrator | Monday 02 February 2026 05:55:14 +0000 (0:00:00.820) 0:21:42.198 *******
2026-02-02 05:55:31.414295 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:55:31.414304 | orchestrator |
2026-02-02 05:55:31.414312 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-02 05:55:31.414321 | orchestrator | Monday 02 February 2026 05:55:15 +0000 (0:00:00.929) 0:21:43.128 *******
2026-02-02 05:55:31.414330 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:55:31.414338 | orchestrator |
2026-02-02 05:55:31.414347 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-02 05:55:31.414356 | orchestrator | Monday 02 February 2026 05:55:16 +0000 (0:00:00.777) 0:21:43.905 *******
2026-02-02 05:55:31.414364 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:55:31.414373 | orchestrator |
2026-02-02 05:55:31.414382 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-02 05:55:31.414390 | orchestrator | Monday 02 February 2026 05:55:17 +0000 (0:00:00.798) 0:21:44.704 *******
2026-02-02 05:55:31.414399 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:55:31.414408 | orchestrator |
2026-02-02 05:55:31.414416 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-02 05:55:31.414425 | orchestrator | Monday 02 February 2026 05:55:17 +0000 (0:00:00.805) 0:21:45.510 *******
2026-02-02 05:55:31.414434 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:55:31.414442 | orchestrator |
2026-02-02 05:55:31.414451 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-02 05:55:31.414460 | orchestrator | Monday 02 February 2026 05:55:18 +0000 (0:00:00.770) 0:21:46.281 *******
2026-02-02 05:55:31.414468 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:55:31.414477 | orchestrator |
2026-02-02 05:55:31.414486 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-02 05:55:31.414494 | orchestrator | Monday 02 February 2026 05:55:19 +0000 (0:00:00.785) 0:21:47.067 *******
2026-02-02 05:55:31.414503 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:55:31.414512 | orchestrator |
2026-02-02 05:55:31.414520 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-02 05:55:31.414529 | orchestrator | Monday 02 February 2026 05:55:20 +0000 (0:00:00.805) 0:21:47.872 *******
2026-02-02 05:55:31.414538 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:55:31.414546 | orchestrator |
2026-02-02 05:55:31.414555 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-02 05:55:31.414564 | orchestrator | Monday 02 February 2026 05:55:21 +0000 (0:00:00.823) 0:21:48.696 *******
2026-02-02 05:55:31.414578 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:55:31.414587 | orchestrator |
2026-02-02 05:55:31.414596 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-02 05:55:31.414605 | orchestrator | Monday 02 February 2026 05:55:21 +0000 (0:00:00.818) 0:21:49.514 *******
2026-02-02 05:55:31.414620 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:55:31.414634 | orchestrator |
2026-02-02 05:55:31.414648 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-02 05:55:31.414712 | orchestrator | Monday 02 February 2026 05:55:22 +0000 (0:00:00.823) 0:21:50.338 *******
2026-02-02 05:55:31.414726 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:55:31.414740 | orchestrator |
2026-02-02 05:55:31.414753 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-02 05:55:31.414768 | orchestrator | Monday 02 February 2026 05:55:23 +0000 (0:00:01.032) 0:21:51.370 *******
2026-02-02 05:55:31.414781 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:55:31.414795 | orchestrator |
2026-02-02 05:55:31.414818 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-02 05:55:31.414835 | orchestrator | Monday 02 February 2026 05:55:24 +0000 (0:00:00.873) 0:21:52.244 *******
2026-02-02 05:55:31.414850 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:55:31.414865 | orchestrator |
2026-02-02 05:55:31.414879 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-02 05:55:31.414893 | orchestrator | Monday 02 February 2026 05:55:25 +0000 (0:00:00.838) 0:21:53.083 *******
2026-02-02 05:55:31.414903 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:55:31.414911 | orchestrator |
2026-02-02 05:55:31.414920 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-02 05:55:31.414928 | orchestrator | Monday 02 February 2026 05:55:26 +0000 (0:00:01.025) 0:21:54.108 *******
2026-02-02 05:55:31.414937 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:55:31.414945 | orchestrator |
2026-02-02 05:55:31.414954 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-02 05:55:31.414962 | orchestrator | Monday 02 February 2026 05:55:27 +0000 (0:00:00.789) 0:21:54.898 *******
2026-02-02 05:55:31.414971 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:55:31.414980 | orchestrator |
2026-02-02 05:55:31.414988 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-02 05:55:31.414997 | orchestrator | Monday 02 February 2026 05:55:28 +0000 (0:00:00.799) 0:21:55.697 *******
2026-02-02 05:55:31.415005 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:55:31.415014 | orchestrator |
2026-02-02 05:55:31.415022 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-02 05:55:31.415031 | orchestrator | Monday 02 February 2026 05:55:28 +0000 (0:00:00.805) 0:21:56.503 *******
2026-02-02 05:55:31.415039 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:55:31.415048 | orchestrator |
2026-02-02 05:55:31.415057 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-02 05:55:31.415065 | orchestrator | Monday 02 February 2026 05:55:29 +0000 (0:00:00.857) 0:21:57.361 *******
2026-02-02 05:55:31.415074 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:55:31.415082 | orchestrator |
2026-02-02 05:55:31.415092 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-02 05:55:31.415106 | orchestrator | Monday 02 February 2026 05:55:30 +0000 (0:00:00.826) 0:21:58.188 *******
2026-02-02 05:55:31.415134 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.403196 | orchestrator |
2026-02-02 05:56:03.403339 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-02 05:56:03.403367 | orchestrator | Monday 02 February 2026 05:55:31 +0000 (0:00:00.796) 0:21:58.985 *******
2026-02-02 05:56:03.403387 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.403407 | orchestrator |
2026-02-02 05:56:03.403425 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-02 05:56:03.403441 | orchestrator | Monday 02 February 2026 05:55:32 +0000 (0:00:00.865) 0:21:59.850 *******
2026-02-02 05:56:03.403490 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.403508 | orchestrator |
2026-02-02 05:56:03.403524 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-02 05:56:03.403541 | orchestrator | Monday 02 February 2026 05:55:33 +0000 (0:00:00.848) 0:22:00.699 *******
2026-02-02 05:56:03.403558 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.403574 | orchestrator |
2026-02-02 05:56:03.403591 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-02 05:56:03.403609 | orchestrator | Monday 02 February 2026 05:55:33 +0000 (0:00:00.838) 0:22:01.537 *******
2026-02-02 05:56:03.403676 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.403699 | orchestrator |
2026-02-02 05:56:03.403717 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-02 05:56:03.403735 | orchestrator | Monday 02 February 2026 05:55:34 +0000 (0:00:00.823) 0:22:02.361 *******
2026-02-02 05:56:03.403754 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.403771 | orchestrator |
2026-02-02 05:56:03.403790 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-02 05:56:03.403809 | orchestrator | Monday 02 February 2026 05:55:35 +0000 (0:00:01.009) 0:22:03.371 *******
2026-02-02 05:56:03.403828 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.403841 | orchestrator |
2026-02-02 05:56:03.403853 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-02 05:56:03.403865 | orchestrator | Monday 02 February 2026 05:55:36 +0000 (0:00:00.793) 0:22:04.164 *******
2026-02-02 05:56:03.403876 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.403888 | orchestrator |
2026-02-02 05:56:03.403899 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-02 05:56:03.403911 | orchestrator | Monday 02 February 2026 05:55:37 +0000 (0:00:00.772) 0:22:04.937 *******
2026-02-02 05:56:03.403922 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.403932 | orchestrator |
2026-02-02 05:56:03.403941 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-02 05:56:03.403951 | orchestrator | Monday 02 February 2026 05:55:38 +0000 (0:00:00.768) 0:22:05.705 *******
2026-02-02 05:56:03.403961 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.403970 | orchestrator |
2026-02-02 05:56:03.403980 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-02 05:56:03.403990 | orchestrator | Monday 02 February 2026 05:55:38 +0000 (0:00:00.791) 0:22:06.497 *******
2026-02-02 05:56:03.403999 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.404009 | orchestrator |
2026-02-02 05:56:03.404018 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-02 05:56:03.404028 | orchestrator | Monday 02 February 2026 05:55:39 +0000 (0:00:00.803) 0:22:07.301 *******
2026-02-02 05:56:03.404037 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.404047 | orchestrator |
2026-02-02 05:56:03.404056 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-02 05:56:03.404066 | orchestrator | Monday 02 February 2026 05:55:40 +0000 (0:00:00.823) 0:22:08.124 *******
2026-02-02 05:56:03.404075 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.404085 | orchestrator |
2026-02-02 05:56:03.404094 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-02 05:56:03.404121 | orchestrator | Monday 02 February 2026 05:55:41 +0000 (0:00:00.796) 0:22:08.921 *******
2026-02-02 05:56:03.404131 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.404141 | orchestrator |
2026-02-02 05:56:03.404150 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-02 05:56:03.404160 | orchestrator | Monday 02 February 2026 05:55:42 +0000 (0:00:00.801) 0:22:09.722 *******
2026-02-02 05:56:03.404170 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.404180 | orchestrator |
2026-02-02 05:56:03.404190 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-02 05:56:03.404210 | orchestrator | Monday 02 February 2026 05:55:42 +0000 (0:00:00.784) 0:22:10.506 *******
2026-02-02 05:56:03.404220 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.404229 | orchestrator |
2026-02-02 05:56:03.404239 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-02 05:56:03.404248 | orchestrator | Monday 02 February 2026 05:55:43 +0000 (0:00:00.794) 0:22:11.300 *******
2026-02-02 05:56:03.404258 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.404267 | orchestrator |
2026-02-02 05:56:03.404277 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-02 05:56:03.404287 | orchestrator | Monday 02 February 2026 05:55:44 +0000 (0:00:00.789) 0:22:12.090 *******
2026-02-02 05:56:03.404296 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.404306 | orchestrator |
2026-02-02 05:56:03.404315 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-02 05:56:03.404325 | orchestrator | Monday 02 February 2026 05:55:45 +0000 (0:00:00.787) 0:22:12.878 *******
2026-02-02 05:56:03.404334 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.404344 | orchestrator |
2026-02-02 05:56:03.404354 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-02 05:56:03.404363 | orchestrator | Monday 02 February 2026 05:55:46 +0000 (0:00:00.850) 0:22:13.728 *******
2026-02-02 05:56:03.404372 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.404382 | orchestrator |
2026-02-02 05:56:03.404392 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-02 05:56:03.404401 | orchestrator | Monday 02 February 2026 05:55:47 +0000 (0:00:00.921) 0:22:14.650 *******
2026-02-02 05:56:03.404431 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.404441 | orchestrator |
2026-02-02 05:56:03.404451 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-02 05:56:03.404460 | orchestrator | Monday 02 February 2026 05:55:47 +0000 (0:00:00.787) 0:22:15.437 *******
2026-02-02 05:56:03.404470 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.404479 | orchestrator |
2026-02-02 05:56:03.404489 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-02 05:56:03.404498 | orchestrator | Monday 02 February 2026 05:55:48 +0000 (0:00:00.936) 0:22:16.374 *******
2026-02-02 05:56:03.404508 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.404518 | orchestrator |
2026-02-02 05:56:03.404527 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-02 05:56:03.404537 | orchestrator | Monday 02 February 2026 05:55:49 +0000 (0:00:00.919) 0:22:17.293 *******
2026-02-02 05:56:03.404546 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.404556 | orchestrator |
2026-02-02 05:56:03.404565 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-02 05:56:03.404576 | orchestrator | Monday 02 February 2026 05:55:50 +0000 (0:00:00.846) 0:22:18.140 *******
2026-02-02 05:56:03.404585 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.404595 | orchestrator |
2026-02-02 05:56:03.404605 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-02 05:56:03.404614 | orchestrator | Monday 02 February 2026 05:55:51 +0000 (0:00:00.840) 0:22:18.981 *******
2026-02-02 05:56:03.404647 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.404666 | orchestrator |
2026-02-02 05:56:03.404684 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-02 05:56:03.404700 | orchestrator | Monday 02 February 2026 05:55:52 +0000 (0:00:00.793) 0:22:19.774 *******
2026-02-02 05:56:03.404716 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.404727 | orchestrator |
2026-02-02 05:56:03.404737 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-02 05:56:03.404746 | orchestrator | Monday 02 February 2026 05:55:52 +0000 (0:00:00.807) 0:22:20.582 *******
2026-02-02 05:56:03.404756 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.404765 | orchestrator |
2026-02-02 05:56:03.404781 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-02 05:56:03.404791 | orchestrator | Monday 02 February 2026 05:55:53 +0000 (0:00:00.812) 0:22:21.395 *******
2026-02-02 05:56:03.404800 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-02 05:56:03.404810 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-02 05:56:03.404820 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-02 05:56:03.404829 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.404839 | orchestrator |
2026-02-02 05:56:03.404849 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-02 05:56:03.404858 | orchestrator | Monday 02 February 2026 05:55:54 +0000 (0:00:01.107) 0:22:22.502 *******
2026-02-02 05:56:03.404867 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-02 05:56:03.404877 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-02 05:56:03.404887 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-02 05:56:03.404896 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.404905 | orchestrator |
2026-02-02 05:56:03.404915 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-02 05:56:03.404924 | orchestrator | Monday 02 February 2026 05:55:56 +0000 (0:00:01.664) 0:22:24.167 *******
2026-02-02 05:56:03.404934 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-02 05:56:03.404949 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-02 05:56:03.404959 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-02 05:56:03.404968 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.404978 | orchestrator |
2026-02-02 05:56:03.404988 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-02 05:56:03.404997 | orchestrator | Monday 02 February 2026 05:55:58 +0000 (0:00:01.421) 0:22:25.588 *******
2026-02-02 05:56:03.405007 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.405016 | orchestrator |
2026-02-02 05:56:03.405026 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-02 05:56:03.405035 | orchestrator | Monday 02 February 2026 05:55:58 +0000 (0:00:00.885) 0:22:26.474 *******
2026-02-02 05:56:03.405045 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-02-02 05:56:03.405055 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.405064 | orchestrator |
2026-02-02 05:56:03.405074 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-02 05:56:03.405084 | orchestrator | Monday 02 February 2026 05:55:59 +0000 (0:00:00.884) 0:22:27.359 *******
2026-02-02 05:56:03.405093 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.405103 | orchestrator |
2026-02-02 05:56:03.405112 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-02 05:56:03.405122 | orchestrator | Monday 02 February 2026 05:56:00 +0000 (0:00:00.868) 0:22:28.228 *******
2026-02-02 05:56:03.405131 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-02 05:56:03.405141 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-02 05:56:03.405150 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-02 05:56:03.405160 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.405169 | orchestrator |
2026-02-02 05:56:03.405179 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-02 05:56:03.405188 | orchestrator | Monday 02 February 2026 05:56:01 +0000 (0:00:01.101) 0:22:29.329 *******
2026-02-02 05:56:03.405198 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:03.405208 | orchestrator |
2026-02-02 05:56:03.405217 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-02 05:56:03.405227 | orchestrator | Monday 02 February 2026 05:56:02 +0000 (0:00:00.799) 0:22:30.129 *******
2026-02-02 05:56:03.405242 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:45.502305 | orchestrator |
2026-02-02 05:56:45.502415 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-02 05:56:45.502453 | orchestrator | Monday 02 February 2026 05:56:03 +0000 (0:00:00.845) 0:22:30.974 *******
2026-02-02 05:56:45.502464 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:45.502475 | orchestrator |
2026-02-02 05:56:45.502485 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-02 05:56:45.502495 | orchestrator | Monday 02 February 2026 05:56:04 +0000 (0:00:00.808) 0:22:31.783 *******
2026-02-02 05:56:45.502504 | orchestrator | skipping: [testbed-node-2]
2026-02-02 05:56:45.502514 | orchestrator |
2026-02-02 05:56:45.502523 | orchestrator | PLAY [Upgrade ceph mgr nodes] **************************************************
2026-02-02 05:56:45.502533 | orchestrator |
2026-02-02 05:56:45.502543 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-02-02 05:56:45.502552 | orchestrator | Monday 02 February 2026 05:56:05 +0000 (0:00:01.475) 0:22:33.259 *******
2026-02-02 05:56:45.502562 | orchestrator | changed: [testbed-node-0]
2026-02-02 05:56:45.502572 | orchestrator |
2026-02-02 05:56:45.502582 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-02-02 05:56:45.502639 | orchestrator | Monday 02 February 2026 05:56:18 +0000 (0:00:12.899) 0:22:46.158 *******
2026-02-02 05:56:45.502650 | orchestrator | changed: [testbed-node-0]
2026-02-02 05:56:45.502659 | orchestrator |
2026-02-02 05:56:45.502669 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-02 05:56:45.502678 | orchestrator | Monday 02 February 2026 05:56:21 +0000 (0:00:02.441) 0:22:48.599 *******
2026-02-02 05:56:45.502688 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-02-02 05:56:45.502697 | orchestrator |
2026-02-02 05:56:45.502707 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-02 05:56:45.502716 | orchestrator | Monday 02 February 2026 05:56:22 +0000 (0:00:01.332) 0:22:49.932 *******
2026-02-02 05:56:45.502726 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:56:45.502736 | orchestrator |
2026-02-02 05:56:45.502746 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-02 05:56:45.502755 | orchestrator | Monday 02 February 2026 05:56:23 +0000 (0:00:01.425) 0:22:51.358 *******
2026-02-02 05:56:45.502764 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:56:45.502774 | orchestrator |
2026-02-02 05:56:45.502783 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-02 05:56:45.502793 | orchestrator | Monday 02 February 2026 05:56:24 +0000 (0:00:01.194) 0:22:52.552 *******
2026-02-02 05:56:45.502802 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:56:45.502811 | orchestrator |
2026-02-02 05:56:45.502821 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-02 05:56:45.502830 | orchestrator | Monday 02 February 2026 05:56:26 +0000 (0:00:01.565) 0:22:54.117 *******
2026-02-02 05:56:45.502839 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:56:45.502849 | orchestrator |
2026-02-02 05:56:45.502861 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-02 05:56:45.502872 | orchestrator | Monday 02 February 2026 05:56:27 +0000 (0:00:01.191) 0:22:55.309 *******
2026-02-02 05:56:45.502883 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:56:45.502911 | orchestrator |
2026-02-02 05:56:45.502922 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-02 05:56:45.502934 | orchestrator | Monday 02 February 2026 05:56:28 +0000 (0:00:01.154) 0:22:56.464 *******
2026-02-02 05:56:45.502945 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:56:45.502956 | orchestrator |
2026-02-02 05:56:45.502967 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-02 05:56:45.502979 | orchestrator | Monday 02 February 2026 05:56:30 +0000 (0:00:01.238) 0:22:57.702 *******
2026-02-02 05:56:45.502990 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:56:45.503001 | orchestrator |
2026-02-02 05:56:45.503026 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-02 05:56:45.503037 | orchestrator | Monday 02 February 2026 05:56:31 +0000 (0:00:01.150) 0:22:58.852 *******
2026-02-02 05:56:45.503057 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:56:45.503068 | orchestrator |
2026-02-02 05:56:45.503079 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-02 05:56:45.503090 | orchestrator | Monday 02 February 2026 05:56:32 +0000 (0:00:01.139) 0:22:59.992 *******
2026-02-02 05:56:45.503114 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 05:56:45.503125 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 05:56:45.503136 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 05:56:45.503148 | orchestrator |
2026-02-02 05:56:45.503159 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-02 05:56:45.503170 | orchestrator | Monday 02 February 2026 05:56:34 +0000 (0:00:02.198) 0:23:02.191 *******
2026-02-02 05:56:45.503182 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:56:45.503193 | orchestrator |
2026-02-02 05:56:45.503204 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-02 05:56:45.503215 | orchestrator | Monday 02 February 2026 05:56:35 +0000 (0:00:01.307) 0:23:03.498 *******
2026-02-02 05:56:45.503226 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 05:56:45.503236 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 05:56:45.503245 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 05:56:45.503255 | orchestrator |
2026-02-02 05:56:45.503264 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-02 05:56:45.503273 | orchestrator | Monday 02 February 2026 05:56:39 +0000 (0:00:03.280) 0:23:06.779 *******
2026-02-02 05:56:45.503283 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 05:56:45.503293 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-02 05:56:45.503302 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-02 05:56:45.503312 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:56:45.503322 | orchestrator |
2026-02-02 05:56:45.503347 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-02 05:56:45.503357 | orchestrator | Monday 02 February 2026 05:56:41 +0000 (0:00:02.047) 0:23:08.826 *******
2026-02-02 05:56:45.503369 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-02 05:56:45.503382 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-02 05:56:45.503392 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-02 05:56:45.503402 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:56:45.503411 | orchestrator |
2026-02-02 05:56:45.503421 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-02 05:56:45.503431 | orchestrator | Monday 02 February 2026 05:56:42 +0000 (0:00:01.757) 0:23:10.584 *******
2026-02-02 05:56:45.503442 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-02 05:56:45.503455 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-02 05:56:45.503471 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-02 05:56:45.503481 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:56:45.503491 | orchestrator |
2026-02-02 05:56:45.503501 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-02 05:56:45.503515 | orchestrator | Monday 02 February 2026 05:56:44 +0000 (0:00:01.192) 0:23:11.776 *******
2026-02-02 05:56:45.503527 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '01c921aa07f8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-02 05:56:36.777296', 'end': '2026-02-02 05:56:36.835658', 'delta': '0:00:00.058362', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True,
'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['01c921aa07f8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-02 05:56:45.503540 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'c530967d0aad', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-02 05:56:37.375439', 'end': '2026-02-02 05:56:37.440385', 'delta': '0:00:00.064946', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c530967d0aad'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-02 05:56:45.503557 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'a68c96a70534', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-02 05:56:37.965939', 'end': '2026-02-02 05:56:38.022235', 'delta': '0:00:00.056296', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a68c96a70534'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-02 05:57:04.575049 | orchestrator | 2026-02-02 05:57:04.575159 | orchestrator | TASK [ceph-facts : 
Set_fact _container_exec_cmd] ******************************* 2026-02-02 05:57:04.575177 | orchestrator | Monday 02 February 2026 05:56:45 +0000 (0:00:01.295) 0:23:13.072 ******* 2026-02-02 05:57:04.575191 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:57:04.575205 | orchestrator | 2026-02-02 05:57:04.575217 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-02 05:57:04.575229 | orchestrator | Monday 02 February 2026 05:56:46 +0000 (0:00:01.286) 0:23:14.359 ******* 2026-02-02 05:57:04.575241 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:57:04.575279 | orchestrator | 2026-02-02 05:57:04.575290 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-02 05:57:04.575301 | orchestrator | Monday 02 February 2026 05:56:48 +0000 (0:00:01.276) 0:23:15.635 ******* 2026-02-02 05:57:04.575312 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:57:04.575322 | orchestrator | 2026-02-02 05:57:04.575332 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-02 05:57:04.575344 | orchestrator | Monday 02 February 2026 05:56:49 +0000 (0:00:01.113) 0:23:16.749 ******* 2026-02-02 05:57:04.575355 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:57:04.575366 | orchestrator | 2026-02-02 05:57:04.575377 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 05:57:04.575390 | orchestrator | Monday 02 February 2026 05:56:51 +0000 (0:00:02.067) 0:23:18.817 ******* 2026-02-02 05:57:04.575400 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:57:04.575410 | orchestrator | 2026-02-02 05:57:04.575421 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-02 05:57:04.575431 | orchestrator | Monday 02 February 2026 05:56:52 +0000 (0:00:01.185) 0:23:20.002 ******* 2026-02-02 05:57:04.575442 | orchestrator | skipping: 
[testbed-node-0] 2026-02-02 05:57:04.575453 | orchestrator | 2026-02-02 05:57:04.575465 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-02 05:57:04.575476 | orchestrator | Monday 02 February 2026 05:56:53 +0000 (0:00:01.160) 0:23:21.163 ******* 2026-02-02 05:57:04.575487 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:57:04.575498 | orchestrator | 2026-02-02 05:57:04.575510 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 05:57:04.575521 | orchestrator | Monday 02 February 2026 05:56:54 +0000 (0:00:01.226) 0:23:22.390 ******* 2026-02-02 05:57:04.575532 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:57:04.575544 | orchestrator | 2026-02-02 05:57:04.575555 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-02 05:57:04.575566 | orchestrator | Monday 02 February 2026 05:56:55 +0000 (0:00:01.191) 0:23:23.582 ******* 2026-02-02 05:57:04.575609 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:57:04.575622 | orchestrator | 2026-02-02 05:57:04.575645 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-02 05:57:04.575653 | orchestrator | Monday 02 February 2026 05:56:57 +0000 (0:00:01.224) 0:23:24.807 ******* 2026-02-02 05:57:04.575660 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:57:04.575669 | orchestrator | 2026-02-02 05:57:04.575676 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-02 05:57:04.575684 | orchestrator | Monday 02 February 2026 05:56:58 +0000 (0:00:01.253) 0:23:26.061 ******* 2026-02-02 05:57:04.575692 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:57:04.575700 | orchestrator | 2026-02-02 05:57:04.575708 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-02 05:57:04.575715 | 
orchestrator | Monday 02 February 2026 05:56:59 +0000 (0:00:01.217) 0:23:27.279 ******* 2026-02-02 05:57:04.575723 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:57:04.575731 | orchestrator | 2026-02-02 05:57:04.575739 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-02 05:57:04.575746 | orchestrator | Monday 02 February 2026 05:57:00 +0000 (0:00:01.154) 0:23:28.434 ******* 2026-02-02 05:57:04.575754 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:57:04.575761 | orchestrator | 2026-02-02 05:57:04.575770 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-02 05:57:04.575779 | orchestrator | Monday 02 February 2026 05:57:02 +0000 (0:00:01.256) 0:23:29.690 ******* 2026-02-02 05:57:04.575786 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:57:04.575794 | orchestrator | 2026-02-02 05:57:04.575802 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-02 05:57:04.575810 | orchestrator | Monday 02 February 2026 05:57:03 +0000 (0:00:01.159) 0:23:30.849 ******* 2026-02-02 05:57:04.575820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:57:04.575841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': 
'0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:57:04.575868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:57:04.575878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-02 05:57:04.575888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:57:04.575897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': 
'512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:57:04.575909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:57:04.575926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91f9e36e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part16', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part14', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part15', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 
'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part1', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-02 05:57:05.870941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:57:05.871030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 05:57:05.871041 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:57:05.871051 | orchestrator | 2026-02-02 05:57:05.871060 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-02 05:57:05.871068 | orchestrator | Monday 02 February 2026 05:57:04 +0000 (0:00:01.293) 0:23:32.143 ******* 2026-02-02 05:57:05.871078 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:57:05.871102 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:57:05.871111 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:57:05.871138 | orchestrator | skipping: 
[testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:57:05.871162 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:57:05.871170 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 
'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:57:05.871177 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:57:05.871192 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91f9e36e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part16', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part14', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': 
'2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part15', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part1', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:57:05.871212 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:57:46.481971 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 05:57:46.482150 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:57:46.482171 | orchestrator | 2026-02-02 05:57:46.482185 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-02 05:57:46.482198 | orchestrator | Monday 02 February 2026 05:57:05 +0000 (0:00:01.296) 0:23:33.439 ******* 2026-02-02 05:57:46.482209 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:57:46.482221 | orchestrator | 2026-02-02 05:57:46.482232 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-02 05:57:46.482245 | orchestrator 
| Monday 02 February 2026 05:57:07 +0000 (0:00:01.567) 0:23:35.007 ******* 2026-02-02 05:57:46.482263 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:57:46.482279 | orchestrator | 2026-02-02 05:57:46.482290 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-02 05:57:46.482301 | orchestrator | Monday 02 February 2026 05:57:08 +0000 (0:00:01.148) 0:23:36.156 ******* 2026-02-02 05:57:46.482312 | orchestrator | ok: [testbed-node-0] 2026-02-02 05:57:46.482322 | orchestrator | 2026-02-02 05:57:46.482333 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-02 05:57:46.482344 | orchestrator | Monday 02 February 2026 05:57:10 +0000 (0:00:01.486) 0:23:37.642 ******* 2026-02-02 05:57:46.482354 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:57:46.482365 | orchestrator | 2026-02-02 05:57:46.482393 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-02 05:57:46.482425 | orchestrator | Monday 02 February 2026 05:57:11 +0000 (0:00:01.162) 0:23:38.805 ******* 2026-02-02 05:57:46.482436 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:57:46.482447 | orchestrator | 2026-02-02 05:57:46.482458 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-02 05:57:46.482469 | orchestrator | Monday 02 February 2026 05:57:12 +0000 (0:00:01.230) 0:23:40.036 ******* 2026-02-02 05:57:46.482479 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:57:46.482490 | orchestrator | 2026-02-02 05:57:46.482501 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-02 05:57:46.482512 | orchestrator | Monday 02 February 2026 05:57:13 +0000 (0:00:01.139) 0:23:41.176 ******* 2026-02-02 05:57:46.482525 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-02 05:57:46.482568 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-1) 2026-02-02 05:57:46.482597 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-02 05:57:46.482618 | orchestrator | 2026-02-02 05:57:46.482635 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-02 05:57:46.482653 | orchestrator | Monday 02 February 2026 05:57:15 +0000 (0:00:02.175) 0:23:43.351 ******* 2026-02-02 05:57:46.482670 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-02 05:57:46.482689 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-02 05:57:46.482708 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-02 05:57:46.482726 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:57:46.482745 | orchestrator | 2026-02-02 05:57:46.482757 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-02 05:57:46.482768 | orchestrator | Monday 02 February 2026 05:57:16 +0000 (0:00:01.223) 0:23:44.574 ******* 2026-02-02 05:57:46.482779 | orchestrator | skipping: [testbed-node-0] 2026-02-02 05:57:46.482790 | orchestrator | 2026-02-02 05:57:46.482800 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-02 05:57:46.482811 | orchestrator | Monday 02 February 2026 05:57:18 +0000 (0:00:01.228) 0:23:45.803 ******* 2026-02-02 05:57:46.482822 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-02 05:57:46.482832 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 05:57:46.482844 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 05:57:46.482855 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-02 05:57:46.482866 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4)
2026-02-02 05:57:46.482876 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-02 05:57:46.482887 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-02 05:57:46.482898 | orchestrator |
2026-02-02 05:57:46.482908 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-02 05:57:46.482920 | orchestrator | Monday 02 February 2026 05:57:20 +0000 (0:00:01.896) 0:23:47.699 *******
2026-02-02 05:57:46.482930 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 05:57:46.482941 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 05:57:46.482952 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 05:57:46.482963 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-02 05:57:46.482993 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-02 05:57:46.483005 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-02 05:57:46.483016 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-02 05:57:46.483038 | orchestrator |
2026-02-02 05:57:46.483049 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-02 05:57:46.483059 | orchestrator | Monday 02 February 2026 05:57:22 +0000 (0:00:02.658) 0:23:50.358 *******
2026-02-02 05:57:46.483070 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-02-02 05:57:46.483081 | orchestrator |
2026-02-02 05:57:46.483092 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-02 05:57:46.483103 | orchestrator | Monday 02 February 2026 05:57:23 +0000 (0:00:01.171) 0:23:51.530 *******
2026-02-02 05:57:46.483114 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-02-02 05:57:46.483124 | orchestrator |
2026-02-02 05:57:46.483135 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-02 05:57:46.483146 | orchestrator | Monday 02 February 2026 05:57:25 +0000 (0:00:01.151) 0:23:52.681 *******
2026-02-02 05:57:46.483156 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:57:46.483167 | orchestrator |
2026-02-02 05:57:46.483178 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-02 05:57:46.483188 | orchestrator | Monday 02 February 2026 05:57:26 +0000 (0:00:01.565) 0:23:54.247 *******
2026-02-02 05:57:46.483199 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:57:46.483210 | orchestrator |
2026-02-02 05:57:46.483221 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-02 05:57:46.483231 | orchestrator | Monday 02 February 2026 05:57:27 +0000 (0:00:01.207) 0:23:55.454 *******
2026-02-02 05:57:46.483242 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:57:46.483252 | orchestrator |
2026-02-02 05:57:46.483263 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-02 05:57:46.483281 | orchestrator | Monday 02 February 2026 05:57:29 +0000 (0:00:01.149) 0:23:56.603 *******
2026-02-02 05:57:46.483292 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:57:46.483303 | orchestrator |
2026-02-02 05:57:46.483314 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-02 05:57:46.483324 | orchestrator | Monday 02 February 2026 05:57:30 +0000 (0:00:01.164) 0:23:57.768 *******
2026-02-02 05:57:46.483335 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:57:46.483346 | orchestrator |
2026-02-02 05:57:46.483357 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-02 05:57:46.483367 | orchestrator | Monday 02 February 2026 05:57:31 +0000 (0:00:01.602) 0:23:59.371 *******
2026-02-02 05:57:46.483378 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:57:46.483388 | orchestrator |
2026-02-02 05:57:46.483400 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-02 05:57:46.483410 | orchestrator | Monday 02 February 2026 05:57:32 +0000 (0:00:01.150) 0:24:00.522 *******
2026-02-02 05:57:46.483421 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:57:46.483431 | orchestrator |
2026-02-02 05:57:46.483442 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-02 05:57:46.483453 | orchestrator | Monday 02 February 2026 05:57:34 +0000 (0:00:01.178) 0:24:01.701 *******
2026-02-02 05:57:46.483464 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:57:46.483474 | orchestrator |
2026-02-02 05:57:46.483485 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-02 05:57:46.483496 | orchestrator | Monday 02 February 2026 05:57:35 +0000 (0:00:01.585) 0:24:03.286 *******
2026-02-02 05:57:46.483507 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:57:46.483517 | orchestrator |
2026-02-02 05:57:46.483528 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-02 05:57:46.483539 | orchestrator | Monday 02 February 2026 05:57:37 +0000 (0:00:01.596) 0:24:04.883 *******
2026-02-02 05:57:46.483603 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:57:46.483615 | orchestrator |
2026-02-02 05:57:46.483626 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-02 05:57:46.483636 | orchestrator | Monday 02 February 2026 05:57:38 +0000 (0:00:01.142) 0:24:06.025 *******
2026-02-02 05:57:46.483657 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:57:46.483667 | orchestrator |
2026-02-02 05:57:46.483678 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-02 05:57:46.483689 | orchestrator | Monday 02 February 2026 05:57:39 +0000 (0:00:01.149) 0:24:07.175 *******
2026-02-02 05:57:46.483700 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:57:46.483710 | orchestrator |
2026-02-02 05:57:46.483721 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-02 05:57:46.483732 | orchestrator | Monday 02 February 2026 05:57:40 +0000 (0:00:01.101) 0:24:08.277 *******
2026-02-02 05:57:46.483742 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:57:46.483753 | orchestrator |
2026-02-02 05:57:46.483764 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-02 05:57:46.483775 | orchestrator | Monday 02 February 2026 05:57:41 +0000 (0:00:01.114) 0:24:09.391 *******
2026-02-02 05:57:46.483785 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:57:46.483796 | orchestrator |
2026-02-02 05:57:46.483807 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-02 05:57:46.483817 | orchestrator | Monday 02 February 2026 05:57:42 +0000 (0:00:01.132) 0:24:10.524 *******
2026-02-02 05:57:46.483828 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:57:46.483839 | orchestrator |
2026-02-02 05:57:46.483849 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-02 05:57:46.483860 | orchestrator | Monday 02 February 2026 05:57:44 +0000 (0:00:01.255) 0:24:11.780 *******
2026-02-02 05:57:46.483871 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:57:46.483881 | orchestrator |
2026-02-02 05:57:46.483892 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-02 05:57:46.483903 | orchestrator | Monday 02 February 2026 05:57:45 +0000 (0:00:01.122) 0:24:12.902 *******
2026-02-02 05:57:46.483922 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:58:35.833118 | orchestrator |
2026-02-02 05:58:35.833228 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-02 05:58:35.833244 | orchestrator | Monday 02 February 2026 05:57:46 +0000 (0:00:01.146) 0:24:14.049 *******
2026-02-02 05:58:35.833256 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:58:35.833268 | orchestrator |
2026-02-02 05:58:35.833280 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-02 05:58:35.833291 | orchestrator | Monday 02 February 2026 05:57:47 +0000 (0:00:01.206) 0:24:15.256 *******
2026-02-02 05:58:35.833302 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:58:35.833313 | orchestrator |
2026-02-02 05:58:35.833324 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-02 05:58:35.833335 | orchestrator | Monday 02 February 2026 05:57:48 +0000 (0:00:01.208) 0:24:16.464 *******
2026-02-02 05:58:35.833346 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:58:35.833358 | orchestrator |
2026-02-02 05:58:35.833369 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-02 05:58:35.833381 | orchestrator | Monday 02 February 2026 05:57:50 +0000 (0:00:01.130) 0:24:17.595 *******
2026-02-02 05:58:35.833392 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:58:35.833403 | orchestrator |
2026-02-02 05:58:35.833414 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-02 05:58:35.833425 | orchestrator | Monday 02 February 2026 05:57:51 +0000 (0:00:01.121) 0:24:18.716 *******
2026-02-02 05:58:35.833436 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:58:35.833447 | orchestrator |
2026-02-02 05:58:35.833458 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-02 05:58:35.833469 | orchestrator | Monday 02 February 2026 05:57:52 +0000 (0:00:01.179) 0:24:19.896 *******
2026-02-02 05:58:35.833480 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:58:35.833491 | orchestrator |
2026-02-02 05:58:35.833502 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-02 05:58:35.833599 | orchestrator | Monday 02 February 2026 05:57:53 +0000 (0:00:01.155) 0:24:21.051 *******
2026-02-02 05:58:35.833636 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:58:35.833648 | orchestrator |
2026-02-02 05:58:35.833673 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-02 05:58:35.833687 | orchestrator | Monday 02 February 2026 05:57:54 +0000 (0:00:01.112) 0:24:22.164 *******
2026-02-02 05:58:35.833699 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:58:35.833713 | orchestrator |
2026-02-02 05:58:35.833726 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-02 05:58:35.833739 | orchestrator | Monday 02 February 2026 05:57:55 +0000 (0:00:01.170) 0:24:23.334 *******
2026-02-02 05:58:35.833751 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:58:35.833764 | orchestrator |
2026-02-02 05:58:35.833777 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-02 05:58:35.833791 | orchestrator | Monday 02 February 2026 05:57:56 +0000 (0:00:01.170) 0:24:24.505 *******
2026-02-02 05:58:35.833803 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:58:35.833816 | orchestrator |
2026-02-02 05:58:35.833828 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-02 05:58:35.833841 | orchestrator | Monday 02 February 2026 05:57:58 +0000 (0:00:01.128) 0:24:25.634 *******
2026-02-02 05:58:35.833853 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:58:35.833864 | orchestrator |
2026-02-02 05:58:35.833875 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-02 05:58:35.833885 | orchestrator | Monday 02 February 2026 05:57:59 +0000 (0:00:01.144) 0:24:26.779 *******
2026-02-02 05:58:35.833896 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:58:35.833907 | orchestrator |
2026-02-02 05:58:35.833918 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-02 05:58:35.833929 | orchestrator | Monday 02 February 2026 05:58:00 +0000 (0:00:01.145) 0:24:27.924 *******
2026-02-02 05:58:35.833939 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:58:35.833950 | orchestrator |
2026-02-02 05:58:35.833960 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-02 05:58:35.833971 | orchestrator | Monday 02 February 2026 05:58:01 +0000 (0:00:01.182) 0:24:29.107 *******
2026-02-02 05:58:35.833982 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:58:35.833992 | orchestrator |
2026-02-02 05:58:35.834003 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-02 05:58:35.834074 | orchestrator | Monday 02 February 2026 05:58:02 +0000 (0:00:01.124) 0:24:30.232 *******
2026-02-02 05:58:35.834088 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:58:35.834099 | orchestrator |
2026-02-02 05:58:35.834110 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-02 05:58:35.834121 | orchestrator | Monday 02 February 2026 05:58:04 +0000 (0:00:02.063) 0:24:32.296 *******
2026-02-02 05:58:35.834132 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:58:35.834143 | orchestrator |
2026-02-02 05:58:35.834153 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-02 05:58:35.834164 | orchestrator | Monday 02 February 2026 05:58:07 +0000 (0:00:02.415) 0:24:34.711 *******
2026-02-02 05:58:35.834175 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-02-02 05:58:35.834188 | orchestrator |
2026-02-02 05:58:35.834199 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-02 05:58:35.834209 | orchestrator | Monday 02 February 2026 05:58:08 +0000 (0:00:01.103) 0:24:35.814 *******
2026-02-02 05:58:35.834220 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:58:35.834231 | orchestrator |
2026-02-02 05:58:35.834242 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-02 05:58:35.834252 | orchestrator | Monday 02 February 2026 05:58:09 +0000 (0:00:01.135) 0:24:36.950 *******
2026-02-02 05:58:35.834263 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:58:35.834274 | orchestrator |
2026-02-02 05:58:35.834285 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-02 05:58:35.834304 | orchestrator | Monday 02 February 2026 05:58:10 +0000 (0:00:01.154) 0:24:38.104 *******
2026-02-02 05:58:35.834333 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-02 05:58:35.834344 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-02 05:58:35.834355 | orchestrator |
2026-02-02 05:58:35.834366 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-02 05:58:35.834377 | orchestrator | Monday 02 February 2026 05:58:12 +0000 (0:00:01.851) 0:24:39.956 *******
2026-02-02 05:58:35.834388 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:58:35.834399 | orchestrator |
2026-02-02 05:58:35.834410 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-02 05:58:35.834421 | orchestrator | Monday 02 February 2026 05:58:13 +0000 (0:00:01.453) 0:24:41.409 *******
2026-02-02 05:58:35.834432 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:58:35.834443 | orchestrator |
2026-02-02 05:58:35.834453 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-02 05:58:35.834464 | orchestrator | Monday 02 February 2026 05:58:14 +0000 (0:00:01.136) 0:24:42.546 *******
2026-02-02 05:58:35.834475 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:58:35.834486 | orchestrator |
2026-02-02 05:58:35.834496 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-02 05:58:35.834531 | orchestrator | Monday 02 February 2026 05:58:16 +0000 (0:00:01.136) 0:24:43.683 *******
2026-02-02 05:58:35.834550 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:58:35.834570 | orchestrator |
2026-02-02 05:58:35.834588 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-02 05:58:35.834604 | orchestrator | Monday 02 February 2026 05:58:17 +0000 (0:00:01.156) 0:24:44.840 *******
2026-02-02 05:58:35.834614 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-02-02 05:58:35.834625 | orchestrator |
2026-02-02 05:58:35.834636 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-02 05:58:35.834646 | orchestrator | Monday 02 February 2026 05:58:18 +0000 (0:00:01.199) 0:24:46.039 *******
2026-02-02 05:58:35.834657 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:58:35.834668 | orchestrator |
2026-02-02 05:58:35.834685 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-02 05:58:35.834696 | orchestrator | Monday 02 February 2026 05:58:20 +0000 (0:00:01.780) 0:24:47.819 *******
2026-02-02 05:58:35.834707 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-02 05:58:35.834717 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-02 05:58:35.834728 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-02 05:58:35.834739 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:58:35.834749 | orchestrator |
2026-02-02 05:58:35.834760 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-02 05:58:35.834771 | orchestrator | Monday 02 February 2026 05:58:21 +0000 (0:00:01.138) 0:24:48.958 *******
2026-02-02 05:58:35.834781 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:58:35.834792 | orchestrator |
2026-02-02 05:58:35.834803 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-02 05:58:35.834813 | orchestrator | Monday 02 February 2026 05:58:22 +0000 (0:00:01.117) 0:24:50.075 *******
2026-02-02 05:58:35.834824 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:58:35.834835 | orchestrator |
2026-02-02 05:58:35.834845 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-02 05:58:35.834856 | orchestrator | Monday 02 February 2026 05:58:23 +0000 (0:00:01.177) 0:24:51.253 *******
2026-02-02 05:58:35.834867 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:58:35.834877 | orchestrator |
2026-02-02 05:58:35.834888 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-02 05:58:35.834906 | orchestrator | Monday 02 February 2026 05:58:24 +0000 (0:00:01.136) 0:24:52.390 *******
2026-02-02 05:58:35.834917 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:58:35.834927 | orchestrator |
2026-02-02 05:58:35.834938 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-02 05:58:35.834949 | orchestrator | Monday 02 February 2026 05:58:25 +0000 (0:00:01.169) 0:24:53.560 *******
2026-02-02 05:58:35.834959 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:58:35.834970 | orchestrator |
2026-02-02 05:58:35.834981 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-02 05:58:35.834991 | orchestrator | Monday 02 February 2026 05:58:27 +0000 (0:00:01.145) 0:24:54.705 *******
2026-02-02 05:58:35.835002 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:58:35.835013 | orchestrator |
2026-02-02 05:58:35.835023 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-02 05:58:35.835034 | orchestrator | Monday 02 February 2026 05:58:29 +0000 (0:00:02.615) 0:24:57.320 *******
2026-02-02 05:58:35.835045 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:58:35.835055 | orchestrator |
2026-02-02 05:58:35.835066 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-02 05:58:35.835077 | orchestrator | Monday 02 February 2026 05:58:30 +0000 (0:00:01.180) 0:24:58.501 *******
2026-02-02 05:58:35.835087 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-02-02 05:58:35.835098 | orchestrator |
2026-02-02 05:58:35.835108 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-02 05:58:35.835119 | orchestrator | Monday 02 February 2026 05:58:32 +0000 (0:00:01.374) 0:24:59.875 *******
2026-02-02 05:58:35.835130 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:58:35.835141 | orchestrator |
2026-02-02 05:58:35.835151 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-02 05:58:35.835162 | orchestrator | Monday 02 February 2026 05:58:33 +0000 (0:00:01.199) 0:25:01.075 *******
2026-02-02 05:58:35.835172 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:58:35.835183 | orchestrator |
2026-02-02 05:58:35.835194 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-02 05:58:35.835205 | orchestrator | Monday 02 February 2026 05:58:34 +0000 (0:00:01.156) 0:25:02.231 *******
2026-02-02 05:58:35.835215 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:58:35.835226 | orchestrator |
2026-02-02 05:58:35.835245 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-02 05:59:19.653800 | orchestrator | Monday 02 February 2026 05:58:35 +0000 (0:00:01.169) 0:25:03.401 *******
2026-02-02 05:59:19.653919 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.653938 | orchestrator |
2026-02-02 05:59:19.653952 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-02 05:59:19.653963 | orchestrator | Monday 02 February 2026 05:58:36 +0000 (0:00:01.146) 0:25:04.547 *******
2026-02-02 05:59:19.653975 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.653986 | orchestrator |
2026-02-02 05:59:19.653997 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-02 05:59:19.654008 | orchestrator | Monday 02 February 2026 05:58:38 +0000 (0:00:01.157) 0:25:05.705 *******
2026-02-02 05:59:19.654090 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.654103 | orchestrator |
2026-02-02 05:59:19.654114 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-02 05:59:19.654135 | orchestrator | Monday 02 February 2026 05:58:39 +0000 (0:00:01.190) 0:25:06.895 *******
2026-02-02 05:59:19.654146 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.654157 | orchestrator |
2026-02-02 05:59:19.654168 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-02 05:59:19.654179 | orchestrator | Monday 02 February 2026 05:58:40 +0000 (0:00:01.245) 0:25:08.141 *******
2026-02-02 05:59:19.654190 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.654201 | orchestrator |
2026-02-02 05:59:19.654213 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-02 05:59:19.654248 | orchestrator | Monday 02 February 2026 05:58:41 +0000 (0:00:01.194) 0:25:09.336 *******
2026-02-02 05:59:19.654260 | orchestrator | ok: [testbed-node-0]
2026-02-02 05:59:19.654272 | orchestrator |
2026-02-02 05:59:19.654283 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-02 05:59:19.654293 | orchestrator | Monday 02 February 2026 05:58:42 +0000 (0:00:01.130) 0:25:10.466 *******
2026-02-02 05:59:19.654305 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-02-02 05:59:19.654316 | orchestrator |
2026-02-02 05:59:19.654342 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-02 05:59:19.654355 | orchestrator | Monday 02 February 2026 05:58:44 +0000 (0:00:01.134) 0:25:11.600 *******
2026-02-02 05:59:19.654368 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-02-02 05:59:19.654381 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-02-02 05:59:19.654394 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-02-02 05:59:19.654406 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-02-02 05:59:19.654418 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-02-02 05:59:19.654431 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-02-02 05:59:19.654444 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-02-02 05:59:19.654457 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-02-02 05:59:19.654468 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-02 05:59:19.654501 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-02 05:59:19.654513 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-02 05:59:19.654524 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-02 05:59:19.654535 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-02 05:59:19.654547 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-02 05:59:19.654558 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-02-02 05:59:19.654568 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-02-02 05:59:19.654579 | orchestrator |
2026-02-02 05:59:19.654590 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-02 05:59:19.654601 | orchestrator | Monday 02 February 2026 05:58:51 +0000 (0:00:07.058) 0:25:18.658 *******
2026-02-02 05:59:19.654612 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.654623 | orchestrator |
2026-02-02 05:59:19.654634 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-02 05:59:19.654659 | orchestrator | Monday 02 February 2026 05:58:52 +0000 (0:00:01.089) 0:25:19.748 *******
2026-02-02 05:59:19.654670 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.654692 | orchestrator |
2026-02-02 05:59:19.654703 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-02 05:59:19.654714 | orchestrator | Monday 02 February 2026 05:58:53 +0000 (0:00:01.156) 0:25:20.913 *******
2026-02-02 05:59:19.654725 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.654736 | orchestrator |
2026-02-02 05:59:19.654746 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-02 05:59:19.654757 | orchestrator | Monday 02 February 2026 05:58:54 +0000 (0:00:01.156) 0:25:22.071 *******
2026-02-02 05:59:19.654767 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.654778 | orchestrator |
2026-02-02 05:59:19.654789 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-02 05:59:19.654800 | orchestrator | Monday 02 February 2026 05:58:55 +0000 (0:00:01.133) 0:25:23.205 *******
2026-02-02 05:59:19.654810 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.654821 | orchestrator |
2026-02-02 05:59:19.654832 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-02 05:59:19.654851 | orchestrator | Monday 02 February 2026 05:58:56 +0000 (0:00:01.189) 0:25:24.394 *******
2026-02-02 05:59:19.654862 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.654873 | orchestrator |
2026-02-02 05:59:19.654884 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-02 05:59:19.654894 | orchestrator | Monday 02 February 2026 05:58:57 +0000 (0:00:01.123) 0:25:25.547 *******
2026-02-02 05:59:19.654905 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.654916 | orchestrator |
2026-02-02 05:59:19.654945 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-02 05:59:19.654957 | orchestrator | Monday 02 February 2026 05:58:59 +0000 (0:00:01.123) 0:25:26.671 *******
2026-02-02 05:59:19.654967 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.654978 | orchestrator |
2026-02-02 05:59:19.654989 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-02 05:59:19.655000 | orchestrator | Monday 02 February 2026 05:59:00 +0000 (0:00:01.207) 0:25:27.878 *******
2026-02-02 05:59:19.655010 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.655021 | orchestrator |
2026-02-02 05:59:19.655031 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-02 05:59:19.655042 | orchestrator | Monday 02 February 2026 05:59:01 +0000 (0:00:01.133) 0:25:29.011 *******
2026-02-02 05:59:19.655053 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.655063 | orchestrator |
2026-02-02 05:59:19.655074 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-02 05:59:19.655085 | orchestrator | Monday 02 February 2026 05:59:02 +0000 (0:00:01.160) 0:25:30.172 *******
2026-02-02 05:59:19.655095 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.655106 | orchestrator |
2026-02-02 05:59:19.655190 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-02 05:59:19.655201 | orchestrator | Monday 02 February 2026 05:59:03 +0000 (0:00:01.110) 0:25:31.283 *******
2026-02-02 05:59:19.655212 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.655223 | orchestrator |
2026-02-02 05:59:19.655234 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-02 05:59:19.655245 | orchestrator | Monday 02 February 2026 05:59:04 +0000 (0:00:01.110) 0:25:32.394 *******
2026-02-02 05:59:19.655256 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.655266 | orchestrator |
2026-02-02 05:59:19.655277 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-02 05:59:19.655288 | orchestrator | Monday 02 February 2026 05:59:06 +0000 (0:00:01.236) 0:25:33.631 *******
2026-02-02 05:59:19.655299 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.655309 | orchestrator |
2026-02-02 05:59:19.655333 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-02 05:59:19.655354 | orchestrator | Monday 02 February 2026 05:59:07 +0000 (0:00:01.238) 0:25:34.870 *******
2026-02-02 05:59:19.655374 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.655393 | orchestrator |
2026-02-02 05:59:19.655412 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-02 05:59:19.655431 | orchestrator | Monday 02 February 2026 05:59:08 +0000 (0:00:01.268) 0:25:36.138 *******
2026-02-02 05:59:19.655450 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.655470 | orchestrator |
2026-02-02 05:59:19.655514 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-02 05:59:19.655534 | orchestrator | Monday 02 February 2026 05:59:09 +0000 (0:00:01.133) 0:25:37.272 *******
2026-02-02 05:59:19.655552 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.655572 | orchestrator |
2026-02-02 05:59:19.655592 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-02 05:59:19.655613 | orchestrator | Monday 02 February 2026 05:59:10 +0000 (0:00:01.145) 0:25:38.418 *******
2026-02-02 05:59:19.655631 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.655668 | orchestrator |
2026-02-02 05:59:19.655689 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-02 05:59:19.655710 | orchestrator | Monday 02 February 2026 05:59:12 +0000 (0:00:01.223) 0:25:39.642 *******
2026-02-02 05:59:19.655728 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.655748 | orchestrator |
2026-02-02 05:59:19.655768 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-02 05:59:19.655786 | orchestrator | Monday 02 February 2026 05:59:13 +0000 (0:00:01.107) 0:25:40.750 *******
2026-02-02 05:59:19.655806 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.655824 | orchestrator |
2026-02-02 05:59:19.655844 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-02 05:59:19.655864 | orchestrator | Monday 02 February 2026 05:59:14 +0000 (0:00:01.173) 0:25:41.924 *******
2026-02-02 05:59:19.655886 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.655906 | orchestrator |
2026-02-02 05:59:19.655927 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-02 05:59:19.655948 | orchestrator | Monday 02 February 2026 05:59:15 +0000 (0:00:01.124) 0:25:43.048 *******
2026-02-02 05:59:19.655967 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-02 05:59:19.655987 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-02 05:59:19.656008 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-02 05:59:19.656033 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.656052 | orchestrator |
2026-02-02 05:59:19.656071 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-02 05:59:19.656093 | orchestrator | Monday 02 February 2026 05:59:16 +0000 (0:00:01.390) 0:25:44.439 *******
2026-02-02 05:59:19.656112 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-02 05:59:19.656133 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-02 05:59:19.656153 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-02 05:59:19.656173 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.656193 | orchestrator |
2026-02-02 05:59:19.656213 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-02 05:59:19.656234 | orchestrator | Monday 02 February 2026 05:59:18 +0000 (0:00:01.432) 0:25:45.872 *******
2026-02-02 05:59:19.656253 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-02 05:59:19.656275 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-02 05:59:19.656295 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-02 05:59:19.656314 | orchestrator | skipping: [testbed-node-0]
2026-02-02 05:59:19.656327 | orchestrator |
2026-02-02 05:59:19.656356 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-02 06:00:33.535511 | orchestrator | Monday 02 February 2026 05:59:19 +0000 (0:00:01.351) 0:25:47.223 *******
2026-02-02 06:00:33.535628 | orchestrator | skipping: [testbed-node-0]
2026-02-02 06:00:33.535646 | orchestrator |
2026-02-02 06:00:33.535659 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-02 06:00:33.535671 | orchestrator | Monday 02 February 2026 05:59:20 +0000 (0:00:01.135) 0:25:48.358 *******
2026-02-02 06:00:33.535682 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-02 06:00:33.535694 | orchestrator | skipping: [testbed-node-0]
2026-02-02 06:00:33.535705 | orchestrator |
2026-02-02 06:00:33.535716 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-02 06:00:33.535727 | orchestrator | Monday 02 February 2026 05:59:22 +0000 (0:00:01.386) 0:25:49.745 *******
2026-02-02 06:00:33.535738 | orchestrator | ok: [testbed-node-0]
2026-02-02 06:00:33.535749 | orchestrator |
2026-02-02 06:00:33.535760 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-02 06:00:33.535771 | orchestrator | Monday 02 February 2026 05:59:23 +0000 (0:00:01.804) 0:25:51.549 *******
2026-02-02 06:00:33.535781 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-02 06:00:33.535817 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 06:00:33.535829 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 06:00:33.535839 | orchestrator |
2026-02-02 06:00:33.535850 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-02 06:00:33.535861 | orchestrator | Monday 02 February 2026 05:59:25 +0000 (0:00:01.695) 0:25:53.245 *******
2026-02-02 06:00:33.535871 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0
2026-02-02 06:00:33.535882 | orchestrator |
2026-02-02 06:00:33.535893 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-02-02 06:00:33.535903 | orchestrator | Monday 02 February 2026 05:59:27 +0000 (0:00:01.469) 0:25:54.715 *******
2026-02-02 06:00:33.535914 | orchestrator | ok: [testbed-node-0]
2026-02-02 06:00:33.535925 | orchestrator |
2026-02-02 06:00:33.535936 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-02-02 06:00:33.535947 | orchestrator | Monday 02 February 2026 05:59:28 +0000 (0:00:01.474) 0:25:56.189 *******
2026-02-02 06:00:33.535957 | orchestrator | skipping: [testbed-node-0]
2026-02-02 06:00:33.535968 | orchestrator |
2026-02-02 06:00:33.535979 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-02-02 06:00:33.535992 | orchestrator | Monday 02 February 2026 05:59:29 +0000 (0:00:01.175) 0:25:57.365 *******
2026-02-02 06:00:33.536005 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-02 06:00:33.536018 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-02 06:00:33.536031 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-02 06:00:33.536045 | orchestrator | ok: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-02-02 06:00:33.536057 | orchestrator | 2026-02-02 06:00:33.536071 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-02-02 06:00:33.536083 | orchestrator | Monday 02 February 2026 05:59:37 +0000 (0:00:07.522) 0:26:04.888 ******* 2026-02-02 06:00:33.536097 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:00:33.536110 | orchestrator | 2026-02-02 06:00:33.536138 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-02-02 06:00:33.536162 | orchestrator | Monday 02 February 2026 05:59:38 +0000 (0:00:01.226) 0:26:06.115 ******* 2026-02-02 06:00:33.536175 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-02 06:00:33.536188 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-02 06:00:33.536201 | orchestrator | 2026-02-02 06:00:33.536213 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-02-02 06:00:33.536226 | orchestrator | Monday 02 February 2026 05:59:41 +0000 (0:00:03.151) 0:26:09.267 ******* 2026-02-02 06:00:33.536238 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-02 06:00:33.536251 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-02 06:00:33.536264 | orchestrator | 2026-02-02 06:00:33.536276 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-02-02 06:00:33.536338 | orchestrator | Monday 02 February 2026 05:59:43 +0000 (0:00:02.086) 0:26:11.353 ******* 2026-02-02 06:00:33.536353 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:00:33.536366 | orchestrator | 2026-02-02 06:00:33.536378 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-02-02 06:00:33.536388 | orchestrator | Monday 02 February 2026 05:59:45 +0000 
(0:00:01.632) 0:26:12.985 ******* 2026-02-02 06:00:33.536399 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:00:33.536411 | orchestrator | 2026-02-02 06:00:33.536422 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-02 06:00:33.536464 | orchestrator | Monday 02 February 2026 05:59:46 +0000 (0:00:01.202) 0:26:14.187 ******* 2026-02-02 06:00:33.536476 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:00:33.536486 | orchestrator | 2026-02-02 06:00:33.536497 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-02 06:00:33.536508 | orchestrator | Monday 02 February 2026 05:59:47 +0000 (0:00:01.240) 0:26:15.428 ******* 2026-02-02 06:00:33.536527 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0 2026-02-02 06:00:33.536538 | orchestrator | 2026-02-02 06:00:33.536548 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-02-02 06:00:33.536559 | orchestrator | Monday 02 February 2026 05:59:49 +0000 (0:00:01.527) 0:26:16.955 ******* 2026-02-02 06:00:33.536570 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:00:33.536580 | orchestrator | 2026-02-02 06:00:33.536591 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-02-02 06:00:33.536602 | orchestrator | Monday 02 February 2026 05:59:50 +0000 (0:00:01.195) 0:26:18.151 ******* 2026-02-02 06:00:33.536613 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:00:33.536624 | orchestrator | 2026-02-02 06:00:33.536635 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-02 06:00:33.536664 | orchestrator | Monday 02 February 2026 05:59:51 +0000 (0:00:01.175) 0:26:19.327 ******* 2026-02-02 06:00:33.536676 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0 2026-02-02 
06:00:33.536687 | orchestrator | 2026-02-02 06:00:33.536698 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-02 06:00:33.536708 | orchestrator | Monday 02 February 2026 05:59:53 +0000 (0:00:01.486) 0:26:20.814 ******* 2026-02-02 06:00:33.536719 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:00:33.536730 | orchestrator | 2026-02-02 06:00:33.536741 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-02 06:00:33.536752 | orchestrator | Monday 02 February 2026 05:59:55 +0000 (0:00:02.075) 0:26:22.890 ******* 2026-02-02 06:00:33.536762 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:00:33.536773 | orchestrator | 2026-02-02 06:00:33.536784 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-02 06:00:33.536795 | orchestrator | Monday 02 February 2026 05:59:57 +0000 (0:00:01.952) 0:26:24.842 ******* 2026-02-02 06:00:33.536805 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:00:33.536816 | orchestrator | 2026-02-02 06:00:33.536827 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-02-02 06:00:33.536838 | orchestrator | Monday 02 February 2026 05:59:59 +0000 (0:00:02.550) 0:26:27.392 ******* 2026-02-02 06:00:33.536848 | orchestrator | changed: [testbed-node-0] 2026-02-02 06:00:33.536859 | orchestrator | 2026-02-02 06:00:33.536870 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-02 06:00:33.536880 | orchestrator | Monday 02 February 2026 06:00:03 +0000 (0:00:03.856) 0:26:31.249 ******* 2026-02-02 06:00:33.536891 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:00:33.536902 | orchestrator | 2026-02-02 06:00:33.536913 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-02-02 06:00:33.536923 | orchestrator | 2026-02-02 06:00:33.536934 | 
orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-02 06:00:33.536945 | orchestrator | Monday 02 February 2026 06:00:04 +0000 (0:00:01.333) 0:26:32.583 ******* 2026-02-02 06:00:33.536961 | orchestrator | changed: [testbed-node-1] 2026-02-02 06:00:33.536972 | orchestrator | 2026-02-02 06:00:33.536983 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-02-02 06:00:33.536994 | orchestrator | Monday 02 February 2026 06:00:17 +0000 (0:00:12.529) 0:26:45.113 ******* 2026-02-02 06:00:33.537004 | orchestrator | changed: [testbed-node-1] 2026-02-02 06:00:33.537015 | orchestrator | 2026-02-02 06:00:33.537026 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-02 06:00:33.537037 | orchestrator | Monday 02 February 2026 06:00:19 +0000 (0:00:02.003) 0:26:47.117 ******* 2026-02-02 06:00:33.537047 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-02-02 06:00:33.537058 | orchestrator | 2026-02-02 06:00:33.537069 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-02 06:00:33.537079 | orchestrator | Monday 02 February 2026 06:00:20 +0000 (0:00:01.161) 0:26:48.279 ******* 2026-02-02 06:00:33.537096 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:00:33.537107 | orchestrator | 2026-02-02 06:00:33.537118 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-02 06:00:33.537129 | orchestrator | Monday 02 February 2026 06:00:22 +0000 (0:00:01.422) 0:26:49.702 ******* 2026-02-02 06:00:33.537139 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:00:33.537150 | orchestrator | 2026-02-02 06:00:33.537161 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-02 06:00:33.537171 | orchestrator | Monday 02 February 2026 06:00:23 +0000 
(0:00:01.125) 0:26:50.828 ******* 2026-02-02 06:00:33.537182 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:00:33.537192 | orchestrator | 2026-02-02 06:00:33.537203 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-02 06:00:33.537214 | orchestrator | Monday 02 February 2026 06:00:24 +0000 (0:00:01.432) 0:26:52.260 ******* 2026-02-02 06:00:33.537224 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:00:33.537235 | orchestrator | 2026-02-02 06:00:33.537246 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-02 06:00:33.537256 | orchestrator | Monday 02 February 2026 06:00:25 +0000 (0:00:01.136) 0:26:53.397 ******* 2026-02-02 06:00:33.537267 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:00:33.537277 | orchestrator | 2026-02-02 06:00:33.537288 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-02 06:00:33.537299 | orchestrator | Monday 02 February 2026 06:00:26 +0000 (0:00:01.168) 0:26:54.566 ******* 2026-02-02 06:00:33.537309 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:00:33.537320 | orchestrator | 2026-02-02 06:00:33.537330 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-02 06:00:33.537341 | orchestrator | Monday 02 February 2026 06:00:28 +0000 (0:00:01.146) 0:26:55.712 ******* 2026-02-02 06:00:33.537351 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:00:33.537362 | orchestrator | 2026-02-02 06:00:33.537372 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-02 06:00:33.537383 | orchestrator | Monday 02 February 2026 06:00:29 +0000 (0:00:01.176) 0:26:56.888 ******* 2026-02-02 06:00:33.537394 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:00:33.537405 | orchestrator | 2026-02-02 06:00:33.537415 | orchestrator | TASK [ceph-facts : Set_fact monitor_name 
ansible_facts['hostname']] ************ 2026-02-02 06:00:33.537426 | orchestrator | Monday 02 February 2026 06:00:30 +0000 (0:00:01.235) 0:26:58.124 ******* 2026-02-02 06:00:33.537457 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 06:00:33.537469 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-02 06:00:33.537479 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 06:00:33.537490 | orchestrator | 2026-02-02 06:00:33.537501 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-02 06:00:33.537511 | orchestrator | Monday 02 February 2026 06:00:32 +0000 (0:00:01.694) 0:26:59.819 ******* 2026-02-02 06:00:33.537522 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:00:33.537533 | orchestrator | 2026-02-02 06:00:33.537543 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-02 06:00:33.537562 | orchestrator | Monday 02 February 2026 06:00:33 +0000 (0:00:01.284) 0:27:01.103 ******* 2026-02-02 06:00:57.821243 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 06:00:57.821355 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-02 06:00:57.821367 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 06:00:57.821375 | orchestrator | 2026-02-02 06:00:57.821383 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-02 06:00:57.821397 | orchestrator | Monday 02 February 2026 06:00:36 +0000 (0:00:02.919) 0:27:04.023 ******* 2026-02-02 06:00:57.821411 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-02 06:00:57.821529 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-02 06:00:57.821546 | orchestrator | skipping: 
[testbed-node-1] => (item=testbed-node-2)  2026-02-02 06:00:57.821557 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:00:57.821565 | orchestrator | 2026-02-02 06:00:57.821572 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-02 06:00:57.821579 | orchestrator | Monday 02 February 2026 06:00:37 +0000 (0:00:01.388) 0:27:05.412 ******* 2026-02-02 06:00:57.821587 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-02 06:00:57.821597 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-02 06:00:57.821616 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-02 06:00:57.821623 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:00:57.821630 | orchestrator | 2026-02-02 06:00:57.821637 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-02 06:00:57.821644 | orchestrator | Monday 02 February 2026 06:00:39 +0000 (0:00:01.589) 0:27:07.001 ******* 2026-02-02 06:00:57.821652 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 
'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 06:00:57.821661 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 06:00:57.821668 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 06:00:57.821675 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:00:57.821682 | orchestrator | 2026-02-02 06:00:57.821688 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-02 06:00:57.821695 | orchestrator | Monday 02 February 2026 06:00:40 +0000 (0:00:01.188) 0:27:08.190 ******* 2026-02-02 06:00:57.821704 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '01c921aa07f8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-02 06:00:34.056227', 'end': '2026-02-02 06:00:34.109437', 'delta': '0:00:00.053210', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['01c921aa07f8'], 
'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-02 06:00:57.821735 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'c530967d0aad', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-02 06:00:34.661146', 'end': '2026-02-02 06:00:34.702247', 'delta': '0:00:00.041101', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c530967d0aad'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-02 06:00:57.821747 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'a68c96a70534', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-02 06:00:35.238972', 'end': '2026-02-02 06:00:35.288313', 'delta': '0:00:00.049341', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a68c96a70534'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-02 06:00:57.821754 | orchestrator | 2026-02-02 06:00:57.821761 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-02 06:00:57.821767 | orchestrator | Monday 02 February 2026 06:00:41 +0000 (0:00:01.206) 0:27:09.396 ******* 2026-02-02 
06:00:57.821774 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:00:57.821781 | orchestrator | 2026-02-02 06:00:57.821787 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-02 06:00:57.821794 | orchestrator | Monday 02 February 2026 06:00:43 +0000 (0:00:01.251) 0:27:10.648 ******* 2026-02-02 06:00:57.821801 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:00:57.821810 | orchestrator | 2026-02-02 06:00:57.821821 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-02 06:00:57.821832 | orchestrator | Monday 02 February 2026 06:00:44 +0000 (0:00:01.222) 0:27:11.871 ******* 2026-02-02 06:00:57.821842 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:00:57.821851 | orchestrator | 2026-02-02 06:00:57.821861 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-02 06:00:57.821871 | orchestrator | Monday 02 February 2026 06:00:45 +0000 (0:00:01.207) 0:27:13.078 ******* 2026-02-02 06:00:57.821880 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-02 06:00:57.821889 | orchestrator | 2026-02-02 06:00:57.821899 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 06:00:57.821908 | orchestrator | Monday 02 February 2026 06:00:47 +0000 (0:00:01.973) 0:27:15.051 ******* 2026-02-02 06:00:57.821918 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:00:57.821928 | orchestrator | 2026-02-02 06:00:57.821938 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-02 06:00:57.821948 | orchestrator | Monday 02 February 2026 06:00:48 +0000 (0:00:01.127) 0:27:16.179 ******* 2026-02-02 06:00:57.821957 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:00:57.821967 | orchestrator | 2026-02-02 06:00:57.821976 | orchestrator | TASK [ceph-facts : Generate cluster fsid] 
************************************** 2026-02-02 06:00:57.821986 | orchestrator | Monday 02 February 2026 06:00:49 +0000 (0:00:01.252) 0:27:17.431 ******* 2026-02-02 06:00:57.821995 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:00:57.822005 | orchestrator | 2026-02-02 06:00:57.822057 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 06:00:57.822069 | orchestrator | Monday 02 February 2026 06:00:51 +0000 (0:00:01.215) 0:27:18.647 ******* 2026-02-02 06:00:57.822079 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:00:57.822097 | orchestrator | 2026-02-02 06:00:57.822109 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-02 06:00:57.822121 | orchestrator | Monday 02 February 2026 06:00:52 +0000 (0:00:01.155) 0:27:19.802 ******* 2026-02-02 06:00:57.822138 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:00:57.822148 | orchestrator | 2026-02-02 06:00:57.822154 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-02 06:00:57.822161 | orchestrator | Monday 02 February 2026 06:00:53 +0000 (0:00:01.113) 0:27:20.916 ******* 2026-02-02 06:00:57.822168 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:00:57.822175 | orchestrator | 2026-02-02 06:00:57.822181 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-02 06:00:57.822188 | orchestrator | Monday 02 February 2026 06:00:54 +0000 (0:00:01.101) 0:27:22.018 ******* 2026-02-02 06:00:57.822194 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:00:57.822201 | orchestrator | 2026-02-02 06:00:57.822207 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-02 06:00:57.822214 | orchestrator | Monday 02 February 2026 06:00:55 +0000 (0:00:01.101) 0:27:23.120 ******* 2026-02-02 06:00:57.822220 | orchestrator | skipping: 
[testbed-node-1] 2026-02-02 06:00:57.822227 | orchestrator | 2026-02-02 06:00:57.822234 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-02 06:00:57.822240 | orchestrator | Monday 02 February 2026 06:00:56 +0000 (0:00:01.144) 0:27:24.264 ******* 2026-02-02 06:00:57.822247 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:00:57.822253 | orchestrator | 2026-02-02 06:00:57.822260 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-02 06:00:57.822277 | orchestrator | Monday 02 February 2026 06:00:57 +0000 (0:00:01.125) 0:27:25.390 ******* 2026-02-02 06:01:01.434402 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:01:01.434552 | orchestrator | 2026-02-02 06:01:01.434570 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-02 06:01:01.434583 | orchestrator | Monday 02 February 2026 06:00:58 +0000 (0:00:01.134) 0:27:26.524 ******* 2026-02-02 06:01:01.434596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:01:01.434612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:01:01.434642 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:01:01.434656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-27-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-02 06:01:01.434699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:01:01.434711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:01:01.434722 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:01:01.434765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2343887', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part16', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part14', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part15', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part1', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-02 06:01:01.434780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:01:01.434792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:01:01.434811 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:01:01.434822 | orchestrator | 2026-02-02 06:01:01.434834 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-02 06:01:01.434845 | orchestrator | Monday 02 February 2026 06:01:00 +0000 (0:00:01.261) 0:27:27.786 ******* 2026-02-02 06:01:01.434857 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:01:01.434870 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:01:01.434889 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:01:12.086127 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-27-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:01:12.086296 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:01:12.086360 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:01:12.086382 | orchestrator | skipping: 
[testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:01:12.086499 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2343887', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part16', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part14', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part15', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part1', 'scsi-SQEMU_QEMU_HARDDISK_a2343887-7bc1-4466-877e-c2a88f331c7f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:01:12.086533 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:01:12.086567 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 
'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:01:12.086587 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:01:12.086610 | orchestrator | 2026-02-02 06:01:12.086631 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-02 06:01:12.086651 | orchestrator | Monday 02 February 2026 06:01:01 +0000 (0:00:01.223) 0:27:29.009 ******* 2026-02-02 06:01:12.086670 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:01:12.086690 | orchestrator | 2026-02-02 06:01:12.086711 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-02 06:01:12.086731 | orchestrator | Monday 02 February 2026 06:01:02 +0000 (0:00:01.514) 0:27:30.524 ******* 2026-02-02 06:01:12.086752 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:01:12.086766 | orchestrator | 2026-02-02 06:01:12.086780 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-02 06:01:12.086793 | orchestrator | Monday 02 February 2026 06:01:04 +0000 (0:00:01.091) 0:27:31.616 ******* 2026-02-02 06:01:12.086805 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:01:12.086819 | orchestrator | 2026-02-02 06:01:12.086832 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-02 06:01:12.086845 | orchestrator | Monday 02 February 2026 06:01:05 +0000 (0:00:01.589) 0:27:33.206 ******* 2026-02-02 06:01:12.086858 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:01:12.086871 | orchestrator | 2026-02-02 06:01:12.086884 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-02 06:01:12.086898 | orchestrator | Monday 02 February 2026 06:01:06 
+0000 (0:00:01.147) 0:27:34.353 ******* 2026-02-02 06:01:12.086913 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:01:12.086932 | orchestrator | 2026-02-02 06:01:12.086950 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-02 06:01:12.086967 | orchestrator | Monday 02 February 2026 06:01:08 +0000 (0:00:01.297) 0:27:35.650 ******* 2026-02-02 06:01:12.086985 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:01:12.087004 | orchestrator | 2026-02-02 06:01:12.087023 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-02 06:01:12.087041 | orchestrator | Monday 02 February 2026 06:01:09 +0000 (0:00:01.156) 0:27:36.807 ******* 2026-02-02 06:01:12.087057 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-02 06:01:12.087069 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-02 06:01:12.087079 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-02 06:01:12.087090 | orchestrator | 2026-02-02 06:01:12.087101 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-02 06:01:12.087112 | orchestrator | Monday 02 February 2026 06:01:10 +0000 (0:00:01.698) 0:27:38.505 ******* 2026-02-02 06:01:12.087122 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-02 06:01:12.087133 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-02 06:01:12.087144 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-02 06:01:12.087155 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:01:12.087166 | orchestrator | 2026-02-02 06:01:12.087188 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-02 06:01:49.177999 | orchestrator | Monday 02 February 2026 06:01:12 +0000 (0:00:01.148) 0:27:39.654 ******* 2026-02-02 06:01:49.178269 | orchestrator 
| skipping: [testbed-node-1] 2026-02-02 06:01:49.178298 | orchestrator | 2026-02-02 06:01:49.178312 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-02 06:01:49.178324 | orchestrator | Monday 02 February 2026 06:01:13 +0000 (0:00:01.171) 0:27:40.825 ******* 2026-02-02 06:01:49.178335 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 06:01:49.178347 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-02 06:01:49.178358 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 06:01:49.178369 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-02 06:01:49.178380 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-02 06:01:49.178422 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-02 06:01:49.178436 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-02 06:01:49.178447 | orchestrator | 2026-02-02 06:01:49.178458 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-02 06:01:49.178482 | orchestrator | Monday 02 February 2026 06:01:15 +0000 (0:00:02.288) 0:27:43.114 ******* 2026-02-02 06:01:49.178494 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 06:01:49.178507 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-02 06:01:49.178520 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 06:01:49.178535 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-02 06:01:49.178556 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-02 06:01:49.178574 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-02 06:01:49.178592 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-02 06:01:49.178614 | orchestrator | 2026-02-02 06:01:49.178637 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-02 06:01:49.178659 | orchestrator | Monday 02 February 2026 06:01:17 +0000 (0:00:02.372) 0:27:45.486 ******* 2026-02-02 06:01:49.178679 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1 2026-02-02 06:01:49.178694 | orchestrator | 2026-02-02 06:01:49.178706 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-02 06:01:49.178719 | orchestrator | Monday 02 February 2026 06:01:19 +0000 (0:00:01.167) 0:27:46.654 ******* 2026-02-02 06:01:49.178733 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1 2026-02-02 06:01:49.178746 | orchestrator | 2026-02-02 06:01:49.178759 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-02 06:01:49.178773 | orchestrator | Monday 02 February 2026 06:01:20 +0000 (0:00:01.184) 0:27:47.839 ******* 2026-02-02 06:01:49.178786 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:01:49.178799 | orchestrator | 2026-02-02 06:01:49.178812 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-02 06:01:49.178825 | orchestrator | Monday 02 February 2026 06:01:22 +0000 (0:00:01.987) 0:27:49.826 ******* 2026-02-02 06:01:49.178838 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:01:49.178851 | orchestrator | 2026-02-02 06:01:49.178863 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-02 
06:01:49.178873 | orchestrator | Monday 02 February 2026 06:01:23 +0000 (0:00:01.136) 0:27:50.962 ******* 2026-02-02 06:01:49.178884 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:01:49.178895 | orchestrator | 2026-02-02 06:01:49.178905 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-02 06:01:49.178926 | orchestrator | Monday 02 February 2026 06:01:24 +0000 (0:00:01.164) 0:27:52.126 ******* 2026-02-02 06:01:49.178937 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:01:49.178948 | orchestrator | 2026-02-02 06:01:49.178959 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-02 06:01:49.178969 | orchestrator | Monday 02 February 2026 06:01:25 +0000 (0:00:01.143) 0:27:53.270 ******* 2026-02-02 06:01:49.178980 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:01:49.178991 | orchestrator | 2026-02-02 06:01:49.179001 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-02 06:01:49.179013 | orchestrator | Monday 02 February 2026 06:01:27 +0000 (0:00:01.542) 0:27:54.813 ******* 2026-02-02 06:01:49.179023 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:01:49.179034 | orchestrator | 2026-02-02 06:01:49.179045 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-02 06:01:49.179056 | orchestrator | Monday 02 February 2026 06:01:28 +0000 (0:00:01.098) 0:27:55.911 ******* 2026-02-02 06:01:49.179067 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:01:49.179077 | orchestrator | 2026-02-02 06:01:49.179088 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-02 06:01:49.179099 | orchestrator | Monday 02 February 2026 06:01:29 +0000 (0:00:01.191) 0:27:57.102 ******* 2026-02-02 06:01:49.179110 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:01:49.179120 | orchestrator | 
2026-02-02 06:01:49.179131 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-02 06:01:49.179142 | orchestrator | Monday 02 February 2026 06:01:31 +0000 (0:00:01.635) 0:27:58.738 ******* 2026-02-02 06:01:49.179152 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:01:49.179163 | orchestrator | 2026-02-02 06:01:49.179174 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-02 06:01:49.179207 | orchestrator | Monday 02 February 2026 06:01:32 +0000 (0:00:01.514) 0:28:00.252 ******* 2026-02-02 06:01:49.179218 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:01:49.179229 | orchestrator | 2026-02-02 06:01:49.179240 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-02 06:01:49.179251 | orchestrator | Monday 02 February 2026 06:01:33 +0000 (0:00:00.777) 0:28:01.030 ******* 2026-02-02 06:01:49.179261 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:01:49.179272 | orchestrator | 2026-02-02 06:01:49.179283 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-02 06:01:49.179293 | orchestrator | Monday 02 February 2026 06:01:34 +0000 (0:00:00.819) 0:28:01.850 ******* 2026-02-02 06:01:49.179304 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:01:49.179315 | orchestrator | 2026-02-02 06:01:49.179326 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-02 06:01:49.179337 | orchestrator | Monday 02 February 2026 06:01:35 +0000 (0:00:00.768) 0:28:02.618 ******* 2026-02-02 06:01:49.179347 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:01:49.179358 | orchestrator | 2026-02-02 06:01:49.179369 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-02 06:01:49.179380 | orchestrator | Monday 02 February 2026 06:01:35 +0000 (0:00:00.787) 
0:28:03.406 ******* 2026-02-02 06:01:49.179411 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:01:49.179423 | orchestrator | 2026-02-02 06:01:49.179433 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-02 06:01:49.179444 | orchestrator | Monday 02 February 2026 06:01:36 +0000 (0:00:00.809) 0:28:04.216 ******* 2026-02-02 06:01:49.179461 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:01:49.179472 | orchestrator | 2026-02-02 06:01:49.179483 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-02 06:01:49.179494 | orchestrator | Monday 02 February 2026 06:01:37 +0000 (0:00:00.786) 0:28:05.002 ******* 2026-02-02 06:01:49.179504 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:01:49.179515 | orchestrator | 2026-02-02 06:01:49.179526 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-02 06:01:49.179543 | orchestrator | Monday 02 February 2026 06:01:38 +0000 (0:00:00.759) 0:28:05.761 ******* 2026-02-02 06:01:49.179554 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:01:49.179564 | orchestrator | 2026-02-02 06:01:49.179575 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-02 06:01:49.179586 | orchestrator | Monday 02 February 2026 06:01:38 +0000 (0:00:00.763) 0:28:06.524 ******* 2026-02-02 06:01:49.179597 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:01:49.179607 | orchestrator | 2026-02-02 06:01:49.179618 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-02 06:01:49.179629 | orchestrator | Monday 02 February 2026 06:01:39 +0000 (0:00:00.772) 0:28:07.297 ******* 2026-02-02 06:01:49.179640 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:01:49.179650 | orchestrator | 2026-02-02 06:01:49.179661 | orchestrator | TASK [ceph-common : Include configure_repository.yml] 
************************** 2026-02-02 06:01:49.179672 | orchestrator | Monday 02 February 2026 06:01:40 +0000 (0:00:00.847) 0:28:08.144 ******* 2026-02-02 06:01:49.179683 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:01:49.179693 | orchestrator | 2026-02-02 06:01:49.179704 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-02 06:01:49.179715 | orchestrator | Monday 02 February 2026 06:01:41 +0000 (0:00:00.805) 0:28:08.950 ******* 2026-02-02 06:01:49.179726 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:01:49.179736 | orchestrator | 2026-02-02 06:01:49.179747 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-02 06:01:49.179758 | orchestrator | Monday 02 February 2026 06:01:42 +0000 (0:00:00.809) 0:28:09.759 ******* 2026-02-02 06:01:49.179768 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:01:49.179779 | orchestrator | 2026-02-02 06:01:49.179790 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-02 06:01:49.179800 | orchestrator | Monday 02 February 2026 06:01:42 +0000 (0:00:00.770) 0:28:10.530 ******* 2026-02-02 06:01:49.179811 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:01:49.179822 | orchestrator | 2026-02-02 06:01:49.179832 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-02 06:01:49.179843 | orchestrator | Monday 02 February 2026 06:01:43 +0000 (0:00:00.763) 0:28:11.293 ******* 2026-02-02 06:01:49.179854 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:01:49.179864 | orchestrator | 2026-02-02 06:01:49.179875 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-02 06:01:49.179886 | orchestrator | Monday 02 February 2026 06:01:44 +0000 (0:00:00.764) 0:28:12.057 ******* 2026-02-02 06:01:49.179896 | orchestrator | skipping: [testbed-node-1] 
2026-02-02 06:01:49.179907 | orchestrator | 2026-02-02 06:01:49.179918 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-02 06:01:49.179928 | orchestrator | Monday 02 February 2026 06:01:45 +0000 (0:00:00.742) 0:28:12.800 ******* 2026-02-02 06:01:49.179939 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:01:49.179950 | orchestrator | 2026-02-02 06:01:49.179961 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-02 06:01:49.179971 | orchestrator | Monday 02 February 2026 06:01:46 +0000 (0:00:00.858) 0:28:13.659 ******* 2026-02-02 06:01:49.179982 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:01:49.179993 | orchestrator | 2026-02-02 06:01:49.180004 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-02 06:01:49.180015 | orchestrator | Monday 02 February 2026 06:01:46 +0000 (0:00:00.747) 0:28:14.406 ******* 2026-02-02 06:01:49.180025 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:01:49.180036 | orchestrator | 2026-02-02 06:01:49.180047 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-02 06:01:49.180057 | orchestrator | Monday 02 February 2026 06:01:47 +0000 (0:00:00.783) 0:28:15.189 ******* 2026-02-02 06:01:49.180068 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:01:49.180079 | orchestrator | 2026-02-02 06:01:49.180090 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-02 06:01:49.180106 | orchestrator | Monday 02 February 2026 06:01:48 +0000 (0:00:00.769) 0:28:15.959 ******* 2026-02-02 06:01:49.180117 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:01:49.180128 | orchestrator | 2026-02-02 06:01:49.180146 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-02 06:02:34.550307 | 
orchestrator | Monday 02 February 2026 06:01:49 +0000 (0:00:00.791) 0:28:16.751 ******* 2026-02-02 06:02:34.550446 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:02:34.550460 | orchestrator | 2026-02-02 06:02:34.550469 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-02 06:02:34.550477 | orchestrator | Monday 02 February 2026 06:01:49 +0000 (0:00:00.761) 0:28:17.512 ******* 2026-02-02 06:02:34.550484 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:02:34.550492 | orchestrator | 2026-02-02 06:02:34.550500 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-02 06:02:34.550508 | orchestrator | Monday 02 February 2026 06:01:51 +0000 (0:00:01.656) 0:28:19.169 ******* 2026-02-02 06:02:34.550515 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:02:34.550522 | orchestrator | 2026-02-02 06:02:34.550529 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-02 06:02:34.550537 | orchestrator | Monday 02 February 2026 06:01:53 +0000 (0:00:01.992) 0:28:21.162 ******* 2026-02-02 06:02:34.550544 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1 2026-02-02 06:02:34.550552 | orchestrator | 2026-02-02 06:02:34.550560 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-02 06:02:34.550568 | orchestrator | Monday 02 February 2026 06:01:54 +0000 (0:00:01.113) 0:28:22.276 ******* 2026-02-02 06:02:34.550590 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:02:34.550598 | orchestrator | 2026-02-02 06:02:34.550605 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-02 06:02:34.550612 | orchestrator | Monday 02 February 2026 06:01:55 +0000 (0:00:01.139) 0:28:23.416 ******* 2026-02-02 06:02:34.550619 | orchestrator | skipping: [testbed-node-1] 
2026-02-02 06:02:34.550626 | orchestrator | 2026-02-02 06:02:34.550633 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-02 06:02:34.550641 | orchestrator | Monday 02 February 2026 06:01:56 +0000 (0:00:01.137) 0:28:24.553 ******* 2026-02-02 06:02:34.550648 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-02 06:02:34.550655 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-02 06:02:34.550662 | orchestrator | 2026-02-02 06:02:34.550669 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-02 06:02:34.550677 | orchestrator | Monday 02 February 2026 06:01:58 +0000 (0:00:01.870) 0:28:26.424 ******* 2026-02-02 06:02:34.550684 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:02:34.550691 | orchestrator | 2026-02-02 06:02:34.550698 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-02 06:02:34.550706 | orchestrator | Monday 02 February 2026 06:02:00 +0000 (0:00:01.524) 0:28:27.949 ******* 2026-02-02 06:02:34.550713 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:02:34.550720 | orchestrator | 2026-02-02 06:02:34.550727 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-02 06:02:34.550734 | orchestrator | Monday 02 February 2026 06:02:01 +0000 (0:00:01.172) 0:28:29.122 ******* 2026-02-02 06:02:34.550742 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:02:34.550749 | orchestrator | 2026-02-02 06:02:34.550756 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-02 06:02:34.550763 | orchestrator | Monday 02 February 2026 06:02:02 +0000 (0:00:00.785) 0:28:29.907 ******* 2026-02-02 06:02:34.550770 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:02:34.550777 | orchestrator | 
2026-02-02 06:02:34.550784 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-02 06:02:34.550792 | orchestrator | Monday 02 February 2026 06:02:03 +0000 (0:00:00.768) 0:28:30.676 *******
2026-02-02 06:02:34.550818 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1
2026-02-02 06:02:34.550825 | orchestrator |
2026-02-02 06:02:34.550832 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-02 06:02:34.550840 | orchestrator | Monday 02 February 2026 06:02:04 +0000 (0:00:01.116) 0:28:31.792 *******
2026-02-02 06:02:34.550847 | orchestrator | ok: [testbed-node-1]
2026-02-02 06:02:34.550854 | orchestrator |
2026-02-02 06:02:34.550862 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-02 06:02:34.550870 | orchestrator | Monday 02 February 2026 06:02:05 +0000 (0:00:01.737) 0:28:33.530 *******
2026-02-02 06:02:34.550879 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-02 06:02:34.550888 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-02 06:02:34.550897 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-02 06:02:34.550905 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:02:34.550914 | orchestrator |
2026-02-02 06:02:34.550922 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-02 06:02:34.550930 | orchestrator | Monday 02 February 2026 06:02:07 +0000 (0:00:01.157) 0:28:34.688 *******
2026-02-02 06:02:34.550939 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:02:34.550947 | orchestrator |
2026-02-02 06:02:34.550956 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-02 06:02:34.550964 | orchestrator | Monday 02 February 2026 06:02:08 +0000 (0:00:01.103) 0:28:35.792 *******
2026-02-02 06:02:34.550972 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:02:34.550979 | orchestrator |
2026-02-02 06:02:34.550986 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-02 06:02:34.550993 | orchestrator | Monday 02 February 2026 06:02:09 +0000 (0:00:01.193) 0:28:36.985 *******
2026-02-02 06:02:34.551000 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:02:34.551007 | orchestrator |
2026-02-02 06:02:34.551015 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-02 06:02:34.551022 | orchestrator | Monday 02 February 2026 06:02:10 +0000 (0:00:01.178) 0:28:38.164 *******
2026-02-02 06:02:34.551029 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:02:34.551036 | orchestrator |
2026-02-02 06:02:34.551057 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-02 06:02:34.551065 | orchestrator | Monday 02 February 2026 06:02:11 +0000 (0:00:01.139) 0:28:39.303 *******
2026-02-02 06:02:34.551072 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:02:34.551079 | orchestrator |
2026-02-02 06:02:34.551086 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-02 06:02:34.551094 | orchestrator | Monday 02 February 2026 06:02:12 +0000 (0:00:00.871) 0:28:40.175 *******
2026-02-02 06:02:34.551101 | orchestrator | ok: [testbed-node-1]
2026-02-02 06:02:34.551108 | orchestrator |
2026-02-02 06:02:34.551115 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-02 06:02:34.551122 | orchestrator | Monday 02 February 2026 06:02:14 +0000 (0:00:02.235) 0:28:42.411 *******
2026-02-02 06:02:34.551129 | orchestrator | ok: [testbed-node-1]
2026-02-02 06:02:34.551137 | orchestrator |
2026-02-02 06:02:34.551144 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-02 06:02:34.551151 | orchestrator | Monday 02 February 2026 06:02:15 +0000 (0:00:00.775) 0:28:43.186 *******
2026-02-02 06:02:34.551158 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1
2026-02-02 06:02:34.551165 | orchestrator |
2026-02-02 06:02:34.551172 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-02 06:02:34.551179 | orchestrator | Monday 02 February 2026 06:02:16 +0000 (0:00:01.114) 0:28:44.301 *******
2026-02-02 06:02:34.551190 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:02:34.551198 | orchestrator |
2026-02-02 06:02:34.551205 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-02 06:02:34.551216 | orchestrator | Monday 02 February 2026 06:02:17 +0000 (0:00:01.127) 0:28:45.429 *******
2026-02-02 06:02:34.551223 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:02:34.551231 | orchestrator |
2026-02-02 06:02:34.551238 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-02 06:02:34.551245 | orchestrator | Monday 02 February 2026 06:02:18 +0000 (0:00:01.123) 0:28:46.552 *******
2026-02-02 06:02:34.551252 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:02:34.551259 | orchestrator |
2026-02-02 06:02:34.551266 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-02 06:02:34.551274 | orchestrator | Monday 02 February 2026 06:02:20 +0000 (0:00:01.116) 0:28:47.669 *******
2026-02-02 06:02:34.551281 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:02:34.551288 | orchestrator |
2026-02-02 06:02:34.551295 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-02 06:02:34.551302 | orchestrator | Monday 02 February 2026 06:02:21 +0000 (0:00:01.169) 0:28:48.839 *******
2026-02-02 06:02:34.551309 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:02:34.551316 | orchestrator |
2026-02-02 06:02:34.551324 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-02 06:02:34.551331 | orchestrator | Monday 02 February 2026 06:02:22 +0000 (0:00:01.158) 0:28:49.998 *******
2026-02-02 06:02:34.551338 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:02:34.551345 | orchestrator |
2026-02-02 06:02:34.551352 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-02 06:02:34.551359 | orchestrator | Monday 02 February 2026 06:02:23 +0000 (0:00:01.169) 0:28:51.167 *******
2026-02-02 06:02:34.551366 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:02:34.551394 | orchestrator |
2026-02-02 06:02:34.551402 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-02 06:02:34.551409 | orchestrator | Monday 02 February 2026 06:02:24 +0000 (0:00:01.155) 0:28:52.323 *******
2026-02-02 06:02:34.551416 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:02:34.551423 | orchestrator |
2026-02-02 06:02:34.551430 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-02 06:02:34.551437 | orchestrator | Monday 02 February 2026 06:02:25 +0000 (0:00:01.164) 0:28:53.487 *******
2026-02-02 06:02:34.551444 | orchestrator | ok: [testbed-node-1]
2026-02-02 06:02:34.551451 | orchestrator |
2026-02-02 06:02:34.551458 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-02 06:02:34.551465 | orchestrator | Monday 02 February 2026 06:02:26 +0000 (0:00:00.933) 0:28:54.421 *******
2026-02-02 06:02:34.551472 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1
2026-02-02 06:02:34.551480 | orchestrator |
2026-02-02 06:02:34.551487 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-02 06:02:34.551494 | orchestrator | Monday 02 February 2026 06:02:27 +0000 (0:00:01.130) 0:28:55.551 *******
2026-02-02 06:02:34.551501 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph)
2026-02-02 06:02:34.551508 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/)
2026-02-02 06:02:34.551515 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-02-02 06:02:34.551522 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-02-02 06:02:34.551529 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-02-02 06:02:34.551536 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-02-02 06:02:34.551543 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-02-02 06:02:34.551550 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-02-02 06:02:34.551557 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-02 06:02:34.551564 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-02 06:02:34.551571 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-02 06:02:34.551583 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-02 06:02:34.551590 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-02 06:02:34.551597 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-02 06:02:34.551604 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-02-02 06:02:34.551611 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-02-02 06:02:34.551618 | orchestrator |
2026-02-02 06:02:34.551629 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-02 06:03:15.382732 | orchestrator | Monday 02 February 2026 06:02:34 +0000 (0:00:06.557) 0:29:02.108 *******
2026-02-02 06:03:15.382842 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:03:15.382859 | orchestrator |
2026-02-02 06:03:15.382871 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-02 06:03:15.382882 | orchestrator | Monday 02 February 2026 06:02:35 +0000 (0:00:00.791) 0:29:02.900 *******
2026-02-02 06:03:15.382892 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:03:15.382903 | orchestrator |
2026-02-02 06:03:15.382914 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-02 06:03:15.382925 | orchestrator | Monday 02 February 2026 06:02:36 +0000 (0:00:00.780) 0:29:03.680 *******
2026-02-02 06:03:15.382936 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:03:15.382947 | orchestrator |
2026-02-02 06:03:15.382958 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-02 06:03:15.382969 | orchestrator | Monday 02 February 2026 06:02:36 +0000 (0:00:00.763) 0:29:04.444 *******
2026-02-02 06:03:15.382978 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:03:15.382984 | orchestrator |
2026-02-02 06:03:15.382990 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-02 06:03:15.382996 | orchestrator | Monday 02 February 2026 06:02:37 +0000 (0:00:00.758) 0:29:05.202 *******
2026-02-02 06:03:15.383003 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:03:15.383009 | orchestrator |
2026-02-02 06:03:15.383030 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-02 06:03:15.383036 | orchestrator | Monday 02 February 2026 06:02:38 +0000 (0:00:00.792) 0:29:05.994 *******
2026-02-02 06:03:15.383043 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:03:15.383049 | orchestrator |
2026-02-02 06:03:15.383055 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-02 06:03:15.383063 | orchestrator | Monday 02 February 2026 06:02:39 +0000 (0:00:00.746) 0:29:06.741 *******
2026-02-02 06:03:15.383069 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:03:15.383075 | orchestrator |
2026-02-02 06:03:15.383082 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-02 06:03:15.383088 | orchestrator | Monday 02 February 2026 06:02:39 +0000 (0:00:00.763) 0:29:07.505 *******
2026-02-02 06:03:15.383094 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:03:15.383100 | orchestrator |
2026-02-02 06:03:15.383107 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-02 06:03:15.383113 | orchestrator | Monday 02 February 2026 06:02:40 +0000 (0:00:00.757) 0:29:08.263 *******
2026-02-02 06:03:15.383120 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:03:15.383126 | orchestrator |
2026-02-02 06:03:15.383132 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-02 06:03:15.383139 | orchestrator | Monday 02 February 2026 06:02:41 +0000 (0:00:00.804) 0:29:09.067 *******
2026-02-02 06:03:15.383145 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:03:15.383151 | orchestrator |
2026-02-02 06:03:15.383158 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-02 06:03:15.383164 | orchestrator | Monday 02 February 2026 06:02:42 +0000 (0:00:00.884) 0:29:09.952 *******
2026-02-02 06:03:15.383170 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:03:15.383176 | orchestrator |
2026-02-02 06:03:15.383202 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-02 06:03:15.383213 | orchestrator | Monday 02 February 2026 06:02:43 +0000 (0:00:00.772) 0:29:10.725 *******
2026-02-02 06:03:15.383223 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:03:15.383233 | orchestrator |
2026-02-02 06:03:15.383244 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-02 06:03:15.383253 | orchestrator | Monday 02 February 2026 06:02:43 +0000 (0:00:00.786) 0:29:11.511 *******
2026-02-02 06:03:15.383263 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:03:15.383275 | orchestrator |
2026-02-02 06:03:15.383285 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-02 06:03:15.383296 | orchestrator | Monday 02 February 2026 06:02:44 +0000 (0:00:00.856) 0:29:12.368 *******
2026-02-02 06:03:15.383307 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:03:15.383319 | orchestrator |
2026-02-02 06:03:15.383328 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-02 06:03:15.383336 | orchestrator | Monday 02 February 2026 06:02:45 +0000 (0:00:00.836) 0:29:13.205 *******
2026-02-02 06:03:15.383343 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:03:15.383383 | orchestrator |
2026-02-02 06:03:15.383396 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-02 06:03:15.383406 | orchestrator | Monday 02 February 2026 06:02:46 +0000 (0:00:00.897) 0:29:14.102 *******
2026-02-02 06:03:15.383416 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:03:15.383426 | orchestrator |
2026-02-02 06:03:15.383436 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-02 06:03:15.383446 | orchestrator | Monday 02 February 2026 06:02:47 +0000 (0:00:00.767) 0:29:14.870 *******
2026-02-02 06:03:15.383516 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:03:15.383526 | orchestrator |
2026-02-02 06:03:15.383534 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-02 06:03:15.383544 | orchestrator | Monday 02 February 2026 06:02:48 +0000 (0:00:00.737) 0:29:15.608 *******
2026-02-02 06:03:15.383551 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:03:15.383559 | orchestrator |
2026-02-02 06:03:15.383566 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-02 06:03:15.383574 | orchestrator | Monday 02 February 2026 06:02:48 +0000 (0:00:00.800) 0:29:16.409 *******
2026-02-02 06:03:15.383581 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:03:15.383588 | orchestrator |
2026-02-02 06:03:15.383596 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-02 06:03:15.383603 | orchestrator | Monday 02 February 2026 06:02:49 +0000 (0:00:00.770) 0:29:17.180 *******
2026-02-02 06:03:15.383611 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:03:15.383618 | orchestrator |
2026-02-02 06:03:15.383642 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-02 06:03:15.383649 | orchestrator | Monday 02 February 2026 06:02:50 +0000 (0:00:00.828) 0:29:18.008 *******
2026-02-02 06:03:15.383655 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:03:15.383661 | orchestrator |
2026-02-02 06:03:15.383668 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-02 06:03:15.383674 | orchestrator | Monday 02 February 2026 06:02:51 +0000 (0:00:00.787) 0:29:18.796 *******
2026-02-02 06:03:15.383680 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-02 06:03:15.383687 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-02 06:03:15.383693 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-02 06:03:15.383699 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:03:15.383705 | orchestrator |
2026-02-02 06:03:15.383712 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-02 06:03:15.383718 | orchestrator | Monday 02 February 2026 06:02:52 +0000 (0:00:01.412) 0:29:20.208 *******
2026-02-02 06:03:15.383724 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-02 06:03:15.383739 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-02 06:03:15.383745 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-02 06:03:15.383757 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:03:15.383764 | orchestrator |
2026-02-02 06:03:15.383770 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-02 06:03:15.383776 | orchestrator | Monday 02 February 2026 06:02:54 +0000 (0:00:01.515) 0:29:21.724 *******
2026-02-02 06:03:15.383782 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-02 06:03:15.383788 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-02 06:03:15.383794 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-02 06:03:15.383800 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:03:15.383807 | orchestrator |
2026-02-02 06:03:15.383813 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-02 06:03:15.383819 | orchestrator | Monday 02 February 2026 06:02:55 +0000 (0:00:01.073) 0:29:22.797 *******
2026-02-02 06:03:15.383826 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:03:15.383836 | orchestrator |
2026-02-02 06:03:15.383847 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-02 06:03:15.383857 | orchestrator | Monday 02 February 2026 06:02:56 +0000 (0:00:00.799) 0:29:23.597 *******
2026-02-02 06:03:15.383868 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-02 06:03:15.383877 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:03:15.383887 | orchestrator |
2026-02-02 06:03:15.383896 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-02 06:03:15.383905 | orchestrator | Monday 02 February 2026 06:02:56 +0000 (0:00:00.936) 0:29:24.534 *******
2026-02-02 06:03:15.383916 | orchestrator | ok: [testbed-node-1]
2026-02-02 06:03:15.383926 | orchestrator |
2026-02-02 06:03:15.383937 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-02 06:03:15.383947 | orchestrator | Monday 02 February 2026 06:02:58 +0000 (0:00:01.427) 0:29:25.961 *******
2026-02-02 06:03:15.383959 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-02 06:03:15.383972 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-02 06:03:15.383978 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 06:03:15.383984 | orchestrator |
2026-02-02 06:03:15.383991 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-02 06:03:15.384001 | orchestrator | Monday 02 February 2026 06:02:59 +0000 (0:00:01.316) 0:29:27.278 *******
2026-02-02 06:03:15.384010 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-1
2026-02-02 06:03:15.384020 | orchestrator |
2026-02-02 06:03:15.384030 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-02-02 06:03:15.384040 | orchestrator | Monday 02 February 2026 06:03:00 +0000 (0:00:01.177) 0:29:28.456 *******
2026-02-02 06:03:15.384050 | orchestrator | ok: [testbed-node-1]
2026-02-02 06:03:15.384061 | orchestrator |
2026-02-02 06:03:15.384071 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-02-02 06:03:15.384082 | orchestrator | Monday 02 February 2026 06:03:02 +0000 (0:00:01.532) 0:29:29.988 *******
2026-02-02 06:03:15.384093 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:03:15.384104 | orchestrator |
2026-02-02 06:03:15.384110 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-02-02 06:03:15.384116 | orchestrator | Monday 02 February 2026 06:03:03 +0000 (0:00:01.127) 0:29:31.116 *******
2026-02-02 06:03:15.384123 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 06:03:15.384133 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 06:03:15.384143 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 06:03:15.384153 | orchestrator | ok: [testbed-node-1 -> {{ groups[mon_group_name][0] }}]
2026-02-02 06:03:15.384171 | orchestrator |
2026-02-02 06:03:15.384180 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-02-02 06:03:15.384189 | orchestrator | Monday 02 February 2026 06:03:11 +0000 (0:00:07.478) 0:29:38.594 *******
2026-02-02 06:03:15.384199 | orchestrator | ok: [testbed-node-1]
2026-02-02 06:03:15.384210 | orchestrator |
2026-02-02 06:03:15.384219 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-02-02 06:03:15.384229 | orchestrator | Monday 02 February 2026 06:03:12 +0000 (0:00:01.237) 0:29:39.832 *******
2026-02-02 06:03:15.384239 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-02 06:03:15.384249 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-02 06:03:15.384257 | orchestrator |
2026-02-02 06:03:15.384277 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-02-02 06:04:02.952331 | orchestrator | Monday 02 February 2026 06:03:15 +0000 (0:00:03.122) 0:29:42.954 *******
2026-02-02 06:04:02.952525 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-02 06:04:02.952544 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-02-02 06:04:02.952554 | orchestrator |
2026-02-02 06:04:02.952562 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-02-02 06:04:02.952570 | orchestrator | Monday 02 February 2026 06:03:17 +0000 (0:00:02.006) 0:29:44.961 *******
2026-02-02 06:04:02.952577 | orchestrator | ok: [testbed-node-1]
2026-02-02 06:04:02.952588 | orchestrator |
2026-02-02 06:04:02.952600 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-02-02 06:04:02.952611 | orchestrator | Monday 02 February 2026 06:03:18 +0000 (0:00:01.475) 0:29:46.436 *******
2026-02-02 06:04:02.952623 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:04:02.952634 | orchestrator |
2026-02-02 06:04:02.952645 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-02 06:04:02.952655 | orchestrator | Monday 02 February 2026 06:03:19 +0000 (0:00:00.766) 0:29:47.202 *******
2026-02-02 06:04:02.952667 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:04:02.952678 | orchestrator |
2026-02-02 06:04:02.952689 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-02 06:04:02.952701 | orchestrator | Monday 02 February 2026 06:03:20 +0000 (0:00:00.746) 0:29:47.949 *******
2026-02-02 06:04:02.952731 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-1
2026-02-02 06:04:02.952742 | orchestrator |
2026-02-02 06:04:02.952749 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-02-02 06:04:02.952761 | orchestrator | Monday 02 February 2026 06:03:21 +0000 (0:00:01.124) 0:29:49.074 *******
2026-02-02 06:04:02.952772 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:04:02.952784 | orchestrator |
2026-02-02 06:04:02.952794 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-02-02 06:04:02.952806 | orchestrator | Monday 02 February 2026 06:03:22 +0000 (0:00:01.153) 0:29:50.228 *******
2026-02-02 06:04:02.952817 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:04:02.952829 | orchestrator |
2026-02-02 06:04:02.952841 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-02-02 06:04:02.952852 | orchestrator | Monday 02 February 2026 06:03:23 +0000 (0:00:01.160) 0:29:51.388 *******
2026-02-02 06:04:02.952864 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1
2026-02-02 06:04:02.952876 | orchestrator |
2026-02-02 06:04:02.952887 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-02-02 06:04:02.952895 | orchestrator | Monday 02 February 2026 06:03:24 +0000 (0:00:01.136) 0:29:52.525 *******
2026-02-02 06:04:02.952903 | orchestrator | ok: [testbed-node-1]
2026-02-02 06:04:02.952911 | orchestrator |
2026-02-02 06:04:02.952919 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-02-02 06:04:02.952927 | orchestrator | Monday 02 February 2026 06:03:26 +0000 (0:00:02.049) 0:29:54.575 *******
2026-02-02 06:04:02.952960 | orchestrator | ok: [testbed-node-1]
2026-02-02 06:04:02.952972 | orchestrator |
2026-02-02 06:04:02.952984 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-02-02 06:04:02.952996 | orchestrator | Monday 02 February 2026 06:03:28 +0000 (0:00:01.973) 0:29:56.549 *******
2026-02-02 06:04:02.953007 | orchestrator | ok: [testbed-node-1]
2026-02-02 06:04:02.953018 | orchestrator |
2026-02-02 06:04:02.953030 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-02-02 06:04:02.953041 | orchestrator | Monday 02 February 2026 06:03:31 +0000 (0:00:02.899) 0:29:59.449 *******
2026-02-02 06:04:02.953053 | orchestrator | changed: [testbed-node-1]
2026-02-02 06:04:02.953066 | orchestrator |
2026-02-02 06:04:02.953078 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-02 06:04:02.953092 | orchestrator | Monday 02 February 2026 06:03:35 +0000 (0:00:03.511) 0:30:02.960 *******
2026-02-02 06:04:02.953103 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:04:02.953115 | orchestrator |
2026-02-02 06:04:02.953127 | orchestrator | PLAY [Upgrade ceph mgr nodes] **************************************************
2026-02-02 06:04:02.953140 | orchestrator |
2026-02-02 06:04:02.953150 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-02-02 06:04:02.953162 | orchestrator | Monday 02 February 2026 06:03:36 +0000 (0:00:01.048) 0:30:04.008 *******
2026-02-02 06:04:02.953174 | orchestrator | changed: [testbed-node-2]
2026-02-02 06:04:02.953187 | orchestrator |
2026-02-02 06:04:02.953200 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-02-02 06:04:02.953211 | orchestrator | Monday 02 February 2026 06:03:38 +0000 (0:00:02.433) 0:30:06.442 *******
2026-02-02 06:04:02.953224 | orchestrator | changed: [testbed-node-2]
2026-02-02 06:04:02.953235 | orchestrator |
2026-02-02 06:04:02.953247 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-02 06:04:02.953258 | orchestrator | Monday 02 February 2026 06:03:40 +0000 (0:00:02.031) 0:30:08.474 *******
2026-02-02 06:04:02.953269 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2
2026-02-02 06:04:02.953280 | orchestrator |
2026-02-02 06:04:02.953291 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-02 06:04:02.953302 | orchestrator | Monday 02 February 2026 06:03:42 +0000 (0:00:01.117) 0:30:09.592 *******
2026-02-02 06:04:02.953314 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:04:02.953322 | orchestrator |
2026-02-02 06:04:02.953328 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-02 06:04:02.953358 | orchestrator | Monday 02 February 2026 06:03:43 +0000 (0:00:01.499) 0:30:11.092 *******
2026-02-02 06:04:02.953372 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:04:02.953384 | orchestrator |
2026-02-02 06:04:02.953396 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-02 06:04:02.953409 | orchestrator | Monday 02 February 2026 06:03:44 +0000 (0:00:01.117) 0:30:12.209 *******
2026-02-02 06:04:02.953416 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:04:02.953423 | orchestrator |
2026-02-02 06:04:02.953429 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-02 06:04:02.953455 | orchestrator | Monday 02 February 2026 06:03:46 +0000 (0:00:01.452) 0:30:13.663 *******
2026-02-02 06:04:02.953462 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:04:02.953468 | orchestrator |
2026-02-02 06:04:02.953475 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-02 06:04:02.953482 | orchestrator | Monday 02 February 2026 06:03:47 +0000 (0:00:01.183) 0:30:14.847 *******
2026-02-02 06:04:02.953488 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:04:02.953495 | orchestrator |
2026-02-02 06:04:02.953501 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-02 06:04:02.953508 | orchestrator | Monday 02 February 2026 06:03:48 +0000 (0:00:01.111) 0:30:15.958 *******
2026-02-02 06:04:02.953514 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:04:02.953521 | orchestrator |
2026-02-02 06:04:02.953527 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-02 06:04:02.953543 | orchestrator | Monday 02 February 2026 06:03:49 +0000 (0:00:01.139) 0:30:17.098 *******
2026-02-02 06:04:02.953550 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:04:02.953556 | orchestrator |
2026-02-02 06:04:02.953563 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-02 06:04:02.953598 | orchestrator | Monday 02 February 2026 06:03:50 +0000 (0:00:01.119) 0:30:18.218 *******
2026-02-02 06:04:02.953605 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:04:02.953611 | orchestrator |
2026-02-02 06:04:02.953618 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-02 06:04:02.953631 | orchestrator | Monday 02 February 2026 06:03:51 +0000 (0:00:01.203) 0:30:19.422 *******
2026-02-02 06:04:02.953638 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-02 06:04:02.953644 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 06:04:02.953651 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-02 06:04:02.953658 | orchestrator |
2026-02-02 06:04:02.953664 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-02 06:04:02.953671 | orchestrator | Monday 02 February 2026 06:03:53 +0000 (0:00:01.806) 0:30:21.228 *******
2026-02-02 06:04:02.953678 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:04:02.953684 | orchestrator |
2026-02-02 06:04:02.953691 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-02 06:04:02.953698 | orchestrator | Monday 02 February 2026 06:03:54 +0000 (0:00:01.332) 0:30:22.561 *******
2026-02-02 06:04:02.953704 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-02 06:04:02.953715 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 06:04:02.953727 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-02 06:04:02.953739 | orchestrator |
2026-02-02 06:04:02.953751 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-02 06:04:02.953763 | orchestrator | Monday 02 February 2026 06:03:57 +0000 (0:00:02.764) 0:30:25.325 *******
2026-02-02 06:04:02.953775 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-02 06:04:02.953788 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-02 06:04:02.953801 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-02 06:04:02.953813 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:04:02.953825 | orchestrator |
2026-02-02 06:04:02.953837 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-02 06:04:02.953849 | orchestrator | Monday 02 February 2026 06:03:59 +0000 (0:00:01.512) 0:30:26.838 *******
2026-02-02 06:04:02.953865 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-02 06:04:02.953880 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-02 06:04:02.953892 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-02 06:04:02.954169 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:04:02.954190 | orchestrator |
2026-02-02 06:04:02.954202 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-02 06:04:02.954214 | orchestrator | Monday 02 February 2026 06:04:01 +0000 (0:00:02.516) 0:30:29.354 *******
2026-02-02 06:04:02.954228 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-02 06:04:02.954285 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-02 06:04:23.203296 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-02 06:04:23.203444 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:04:23.203462 | orchestrator |
2026-02-02 06:04:23.203475 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-02 06:04:23.203488 | orchestrator | Monday 02 February 2026 06:04:02 +0000 (0:00:01.168) 0:30:30.522 *******
2026-02-02 06:04:23.203517 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '01c921aa07f8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-02 06:03:55.444543', 'end': '2026-02-02 06:03:55.495017', 'delta': '0:00:00.050474', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['01c921aa07f8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-02 06:04:23.203533 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'c530967d0aad', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-02 06:03:55.998509', 'end': '2026-02-02 06:03:56.044483', 'delta': '0:00:00.045974', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c530967d0aad'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-02 06:04:23.203545 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'a68c96a70534', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-02 06:03:56.510872', 'end': '2026-02-02 06:03:56.558620', 'delta': '0:00:00.047748', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a68c96a70534'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-02 06:04:23.203557 | orchestrator |
2026-02-02 06:04:23.203568 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-02 06:04:23.203579 | orchestrator | Monday 02 February 2026 06:04:04 +0000 (0:00:01.240) 0:30:31.762 *******
2026-02-02 06:04:23.203613 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:04:23.203626 | orchestrator |
2026-02-02 06:04:23.203637 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-02 06:04:23.203648 | orchestrator | Monday 02 February 2026 06:04:05 +0000 (0:00:01.350) 0:30:33.113 *******
2026-02-02 06:04:23.203664 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:04:23.203683 | orchestrator |
2026-02-02 06:04:23.203699 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-02 06:04:23.203719 | orchestrator | Monday 02 February 2026 06:04:07 +0000 (0:00:01.617) 0:30:34.731 *******
2026-02-02 06:04:23.203738 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:04:23.203757 | orchestrator |
2026-02-02 06:04:23.203770 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-02 06:04:23.203781 | orchestrator | Monday 02 February 2026 06:04:08 +0000
(0:00:01.257) 0:30:35.989 ******* 2026-02-02 06:04:23.203791 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-02 06:04:23.203802 | orchestrator | 2026-02-02 06:04:23.203813 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 06:04:23.203824 | orchestrator | Monday 02 February 2026 06:04:10 +0000 (0:00:01.919) 0:30:37.909 ******* 2026-02-02 06:04:23.203836 | orchestrator | ok: [testbed-node-2] 2026-02-02 06:04:23.203850 | orchestrator | 2026-02-02 06:04:23.203863 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-02 06:04:23.203877 | orchestrator | Monday 02 February 2026 06:04:11 +0000 (0:00:01.151) 0:30:39.061 ******* 2026-02-02 06:04:23.203924 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:04:23.203938 | orchestrator | 2026-02-02 06:04:23.203952 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-02 06:04:23.203965 | orchestrator | Monday 02 February 2026 06:04:12 +0000 (0:00:01.100) 0:30:40.162 ******* 2026-02-02 06:04:23.203978 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:04:23.203991 | orchestrator | 2026-02-02 06:04:23.204004 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 06:04:23.204018 | orchestrator | Monday 02 February 2026 06:04:13 +0000 (0:00:01.278) 0:30:41.440 ******* 2026-02-02 06:04:23.204031 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:04:23.204043 | orchestrator | 2026-02-02 06:04:23.204055 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-02 06:04:23.204068 | orchestrator | Monday 02 February 2026 06:04:15 +0000 (0:00:01.159) 0:30:42.600 ******* 2026-02-02 06:04:23.204082 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:04:23.204095 | orchestrator | 2026-02-02 06:04:23.204108 | orchestrator | 
TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-02 06:04:23.204125 | orchestrator | Monday 02 February 2026 06:04:16 +0000 (0:00:01.181) 0:30:43.782 ******* 2026-02-02 06:04:23.204146 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:04:23.204166 | orchestrator | 2026-02-02 06:04:23.204192 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-02 06:04:23.204204 | orchestrator | Monday 02 February 2026 06:04:17 +0000 (0:00:01.119) 0:30:44.901 ******* 2026-02-02 06:04:23.204215 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:04:23.204226 | orchestrator | 2026-02-02 06:04:23.204237 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-02 06:04:23.204248 | orchestrator | Monday 02 February 2026 06:04:18 +0000 (0:00:01.117) 0:30:46.019 ******* 2026-02-02 06:04:23.204258 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:04:23.204269 | orchestrator | 2026-02-02 06:04:23.204280 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-02 06:04:23.204291 | orchestrator | Monday 02 February 2026 06:04:19 +0000 (0:00:01.110) 0:30:47.129 ******* 2026-02-02 06:04:23.204301 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:04:23.204312 | orchestrator | 2026-02-02 06:04:23.204323 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-02 06:04:23.204509 | orchestrator | Monday 02 February 2026 06:04:20 +0000 (0:00:01.208) 0:30:48.338 ******* 2026-02-02 06:04:23.204564 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:04:23.204577 | orchestrator | 2026-02-02 06:04:23.204588 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-02 06:04:23.204598 | orchestrator | Monday 02 February 2026 06:04:21 +0000 (0:00:01.157) 0:30:49.495 ******* 2026-02-02 
06:04:23.204610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:04:23.204623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:04:23.204634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:04:23.204646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-23-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-02 06:04:23.204659 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:04:23.204682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:04:24.643544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:04:24.643680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0dc97797', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part16', 
'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part14', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part15', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part1', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-02 06:04:24.643732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:04:24.643748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:04:24.643764 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:04:24.643780 | orchestrator | 2026-02-02 06:04:24.643798 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-02 06:04:24.643814 | orchestrator | Monday 02 February 2026 06:04:23 +0000 (0:00:01.276) 0:30:50.771 ******* 2026-02-02 06:04:24.643851 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:04:24.643874 | 
orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:04:24.643899 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:04:24.643916 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-23-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:04:24.643934 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:04:24.643949 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:04:24.643965 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:04:24.643999 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0dc97797', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part16', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part14', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part15', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part1', 'scsi-SQEMU_QEMU_HARDDISK_0dc97797-18b0-45ea-a436-4e6412a95502-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:04:59.633805 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:04:59.633908 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:04:59.633921 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:04:59.633933 | orchestrator | 2026-02-02 06:04:59.633944 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-02 06:04:59.633954 | 
orchestrator | Monday 02 February 2026 06:04:24 +0000 (0:00:01.449) 0:30:52.221 ******* 2026-02-02 06:04:59.633963 | orchestrator | ok: [testbed-node-2] 2026-02-02 06:04:59.633973 | orchestrator | 2026-02-02 06:04:59.633982 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-02 06:04:59.633991 | orchestrator | Monday 02 February 2026 06:04:26 +0000 (0:00:01.501) 0:30:53.723 ******* 2026-02-02 06:04:59.633999 | orchestrator | ok: [testbed-node-2] 2026-02-02 06:04:59.634008 | orchestrator | 2026-02-02 06:04:59.634073 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-02 06:04:59.634083 | orchestrator | Monday 02 February 2026 06:04:27 +0000 (0:00:01.152) 0:30:54.875 ******* 2026-02-02 06:04:59.634116 | orchestrator | ok: [testbed-node-2] 2026-02-02 06:04:59.634132 | orchestrator | 2026-02-02 06:04:59.634153 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-02 06:04:59.634175 | orchestrator | Monday 02 February 2026 06:04:28 +0000 (0:00:01.506) 0:30:56.382 ******* 2026-02-02 06:04:59.634191 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:04:59.634206 | orchestrator | 2026-02-02 06:04:59.634221 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-02 06:04:59.634235 | orchestrator | Monday 02 February 2026 06:04:29 +0000 (0:00:01.118) 0:30:57.501 ******* 2026-02-02 06:04:59.634250 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:04:59.634265 | orchestrator | 2026-02-02 06:04:59.634298 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-02 06:04:59.634348 | orchestrator | Monday 02 February 2026 06:04:31 +0000 (0:00:01.253) 0:30:58.754 ******* 2026-02-02 06:04:59.634365 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:04:59.634380 | orchestrator | 2026-02-02 06:04:59.634395 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-02 06:04:59.634412 | orchestrator | Monday 02 February 2026 06:04:32 +0000 (0:00:01.175) 0:30:59.930 ******* 2026-02-02 06:04:59.634428 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-02 06:04:59.634444 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-02-02 06:04:59.634460 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-02 06:04:59.634475 | orchestrator | 2026-02-02 06:04:59.634490 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-02 06:04:59.634505 | orchestrator | Monday 02 February 2026 06:04:34 +0000 (0:00:01.743) 0:31:01.674 ******* 2026-02-02 06:04:59.634522 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-02 06:04:59.634537 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-02 06:04:59.634551 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-02 06:04:59.634567 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:04:59.634582 | orchestrator | 2026-02-02 06:04:59.634597 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-02 06:04:59.634612 | orchestrator | Monday 02 February 2026 06:04:35 +0000 (0:00:01.170) 0:31:02.844 ******* 2026-02-02 06:04:59.634628 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:04:59.634644 | orchestrator | 2026-02-02 06:04:59.634660 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-02 06:04:59.634676 | orchestrator | Monday 02 February 2026 06:04:36 +0000 (0:00:01.184) 0:31:04.029 ******* 2026-02-02 06:04:59.634692 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 06:04:59.634708 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2026-02-02 06:04:59.634723 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-02 06:04:59.634737 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-02 06:04:59.634753 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-02 06:04:59.634769 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-02 06:04:59.634809 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-02 06:04:59.634825 | orchestrator | 2026-02-02 06:04:59.634840 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-02 06:04:59.634855 | orchestrator | Monday 02 February 2026 06:04:38 +0000 (0:00:02.177) 0:31:06.206 ******* 2026-02-02 06:04:59.634870 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 06:04:59.634885 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 06:04:59.634900 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-02 06:04:59.634931 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-02 06:04:59.634948 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-02 06:04:59.634963 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-02 06:04:59.634979 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-02 06:04:59.634994 | orchestrator | 2026-02-02 06:04:59.635010 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-02 06:04:59.635026 | orchestrator | Monday 02 February 2026 06:04:41 +0000 (0:00:02.447) 0:31:08.654 
*******
2026-02-02 06:04:59.635042 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2
2026-02-02 06:04:59.635060 | orchestrator |
2026-02-02 06:04:59.635074 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-02 06:04:59.635089 | orchestrator | Monday 02 February 2026 06:04:42 +0000 (0:00:01.248) 0:31:09.903 *******
2026-02-02 06:04:59.635106 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2
2026-02-02 06:04:59.635122 | orchestrator |
2026-02-02 06:04:59.635138 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-02 06:04:59.635153 | orchestrator | Monday 02 February 2026 06:04:43 +0000 (0:00:01.104) 0:31:11.007 *******
2026-02-02 06:04:59.635168 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:04:59.635184 | orchestrator |
2026-02-02 06:04:59.635200 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-02 06:04:59.635215 | orchestrator | Monday 02 February 2026 06:04:44 +0000 (0:00:01.531) 0:31:12.538 *******
2026-02-02 06:04:59.635231 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:04:59.635247 | orchestrator |
2026-02-02 06:04:59.635264 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-02 06:04:59.635279 | orchestrator | Monday 02 February 2026 06:04:46 +0000 (0:00:01.116) 0:31:13.655 *******
2026-02-02 06:04:59.635294 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:04:59.635309 | orchestrator |
2026-02-02 06:04:59.635378 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-02 06:04:59.635393 | orchestrator | Monday 02 February 2026 06:04:47 +0000 (0:00:01.175) 0:31:14.830 *******
2026-02-02 06:04:59.635408 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:04:59.635422 | orchestrator |
2026-02-02 06:04:59.635436 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-02 06:04:59.635450 | orchestrator | Monday 02 February 2026 06:04:48 +0000 (0:00:01.183) 0:31:16.014 *******
2026-02-02 06:04:59.635478 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:04:59.635496 | orchestrator |
2026-02-02 06:04:59.635512 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-02 06:04:59.635527 | orchestrator | Monday 02 February 2026 06:04:50 +0000 (0:00:01.576) 0:31:17.590 *******
2026-02-02 06:04:59.635541 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:04:59.635557 | orchestrator |
2026-02-02 06:04:59.635573 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-02 06:04:59.635589 | orchestrator | Monday 02 February 2026 06:04:51 +0000 (0:00:01.222) 0:31:18.813 *******
2026-02-02 06:04:59.635603 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:04:59.635620 | orchestrator |
2026-02-02 06:04:59.635636 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-02 06:04:59.635653 | orchestrator | Monday 02 February 2026 06:04:52 +0000 (0:00:01.200) 0:31:20.013 *******
2026-02-02 06:04:59.635669 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:04:59.635685 | orchestrator |
2026-02-02 06:04:59.635701 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-02 06:04:59.635715 | orchestrator | Monday 02 February 2026 06:04:54 +0000 (0:00:01.698) 0:31:21.712 *******
2026-02-02 06:04:59.635732 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:04:59.635748 | orchestrator |
2026-02-02 06:04:59.635776 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-02 06:04:59.635792 | orchestrator | Monday 02 February 2026 06:04:55 +0000 (0:00:01.558) 0:31:23.270 *******
2026-02-02 06:04:59.635809 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:04:59.635825 | orchestrator |
2026-02-02 06:04:59.635840 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-02 06:04:59.635855 | orchestrator | Monday 02 February 2026 06:04:56 +0000 (0:00:00.798) 0:31:24.069 *******
2026-02-02 06:04:59.635870 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:04:59.635886 | orchestrator |
2026-02-02 06:04:59.635902 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-02 06:04:59.635916 | orchestrator | Monday 02 February 2026 06:04:57 +0000 (0:00:00.792) 0:31:24.861 *******
2026-02-02 06:04:59.635931 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:04:59.635947 | orchestrator |
2026-02-02 06:04:59.635963 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-02 06:04:59.635979 | orchestrator | Monday 02 February 2026 06:04:58 +0000 (0:00:00.762) 0:31:25.624 *******
2026-02-02 06:04:59.635994 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:04:59.636010 | orchestrator |
2026-02-02 06:04:59.636024 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-02 06:04:59.636039 | orchestrator | Monday 02 February 2026 06:04:58 +0000 (0:00:00.779) 0:31:26.403 *******
2026-02-02 06:04:59.636069 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:05:41.208045 | orchestrator |
2026-02-02 06:05:41.208185 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-02 06:05:41.208205 | orchestrator | Monday 02 February 2026 06:04:59 +0000 (0:00:00.801) 0:31:27.205 *******
2026-02-02 06:05:41.208217 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:05:41.208230 | orchestrator |
2026-02-02 06:05:41.208242 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-02 06:05:41.208253 | orchestrator | Monday 02 February 2026 06:05:00 +0000 (0:00:00.822) 0:31:28.027 *******
2026-02-02 06:05:41.208263 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:05:41.208275 | orchestrator |
2026-02-02 06:05:41.208286 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-02 06:05:41.208296 | orchestrator | Monday 02 February 2026 06:05:01 +0000 (0:00:00.799) 0:31:28.827 *******
2026-02-02 06:05:41.208384 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:05:41.208397 | orchestrator |
2026-02-02 06:05:41.208408 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-02 06:05:41.208419 | orchestrator | Monday 02 February 2026 06:05:02 +0000 (0:00:00.885) 0:31:29.712 *******
2026-02-02 06:05:41.208430 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:05:41.208441 | orchestrator |
2026-02-02 06:05:41.208452 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-02 06:05:41.208462 | orchestrator | Monday 02 February 2026 06:05:02 +0000 (0:00:00.774) 0:31:30.487 *******
2026-02-02 06:05:41.208473 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:05:41.208484 | orchestrator |
2026-02-02 06:05:41.208509 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-02 06:05:41.208521 | orchestrator | Monday 02 February 2026 06:05:03 +0000 (0:00:00.831) 0:31:31.319 *******
2026-02-02 06:05:41.208532 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:05:41.208553 | orchestrator |
2026-02-02 06:05:41.208564 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-02 06:05:41.208575 | orchestrator | Monday 02 February 2026 06:05:04 +0000 (0:00:00.779) 0:31:32.098 *******
2026-02-02 06:05:41.208586 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:05:41.208597 | orchestrator |
2026-02-02 06:05:41.208608 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-02 06:05:41.208619 | orchestrator | Monday 02 February 2026 06:05:05 +0000 (0:00:00.824) 0:31:32.923 *******
2026-02-02 06:05:41.208630 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:05:41.208641 | orchestrator |
2026-02-02 06:05:41.208680 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-02 06:05:41.208692 | orchestrator | Monday 02 February 2026 06:05:06 +0000 (0:00:00.849) 0:31:33.772 *******
2026-02-02 06:05:41.208702 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:05:41.208714 | orchestrator |
2026-02-02 06:05:41.208725 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-02 06:05:41.208736 | orchestrator | Monday 02 February 2026 06:05:07 +0000 (0:00:00.822) 0:31:34.594 *******
2026-02-02 06:05:41.208746 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:05:41.208757 | orchestrator |
2026-02-02 06:05:41.208768 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-02 06:05:41.208779 | orchestrator | Monday 02 February 2026 06:05:07 +0000 (0:00:00.815) 0:31:35.410 *******
2026-02-02 06:05:41.208789 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:05:41.208800 | orchestrator |
2026-02-02 06:05:41.208811 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-02 06:05:41.208836 | orchestrator | Monday 02 February 2026 06:05:08 +0000 (0:00:00.795) 0:31:36.206 *******
2026-02-02 06:05:41.208847 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:05:41.208858 | orchestrator |
2026-02-02 06:05:41.208869 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-02 06:05:41.208881 | orchestrator | Monday 02 February 2026 06:05:09 +0000 (0:00:00.776) 0:31:36.983 *******
2026-02-02 06:05:41.208892 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:05:41.208902 | orchestrator |
2026-02-02 06:05:41.208913 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-02 06:05:41.208924 | orchestrator | Monday 02 February 2026 06:05:10 +0000 (0:00:00.879) 0:31:37.862 *******
2026-02-02 06:05:41.208934 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:05:41.208945 | orchestrator |
2026-02-02 06:05:41.208955 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-02 06:05:41.208966 | orchestrator | Monday 02 February 2026 06:05:11 +0000 (0:00:00.825) 0:31:38.688 *******
2026-02-02 06:05:41.208976 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:05:41.208987 | orchestrator |
2026-02-02 06:05:41.208998 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-02 06:05:41.209008 | orchestrator | Monday 02 February 2026 06:05:11 +0000 (0:00:00.786) 0:31:39.474 *******
2026-02-02 06:05:41.209019 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:05:41.209029 | orchestrator |
2026-02-02 06:05:41.209040 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-02 06:05:41.209051 | orchestrator | Monday 02 February 2026 06:05:12 +0000 (0:00:00.804) 0:31:40.279 *******
2026-02-02 06:05:41.209061 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:05:41.209072 | orchestrator |
2026-02-02 06:05:41.209082 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-02 06:05:41.209093 | orchestrator | Monday 02 February 2026 06:05:13 +0000 (0:00:00.749) 0:31:41.028 *******
2026-02-02 06:05:41.209103 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:05:41.209114 | orchestrator |
2026-02-02 06:05:41.209124 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-02 06:05:41.209135 | orchestrator | Monday 02 February 2026 06:05:15 +0000 (0:00:01.633) 0:31:42.662 *******
2026-02-02 06:05:41.209146 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:05:41.209156 | orchestrator |
2026-02-02 06:05:41.209167 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-02 06:05:41.209177 | orchestrator | Monday 02 February 2026 06:05:17 +0000 (0:00:02.046) 0:31:44.709 *******
2026-02-02 06:05:41.209188 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2
2026-02-02 06:05:41.209200 | orchestrator |
2026-02-02 06:05:41.209230 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-02 06:05:41.209241 | orchestrator | Monday 02 February 2026 06:05:18 +0000 (0:00:01.472) 0:31:46.181 *******
2026-02-02 06:05:41.209260 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:05:41.209271 | orchestrator |
2026-02-02 06:05:41.209282 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-02 06:05:41.209292 | orchestrator | Monday 02 February 2026 06:05:19 +0000 (0:00:01.164) 0:31:47.346 *******
2026-02-02 06:05:41.209334 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:05:41.209348 | orchestrator |
2026-02-02 06:05:41.209359 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-02 06:05:41.209369 | orchestrator | Monday 02 February 2026 06:05:20 +0000 (0:00:01.140) 0:31:48.486 *******
2026-02-02 06:05:41.209380 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-02 06:05:41.209391 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-02 06:05:41.209402 | orchestrator |
2026-02-02 06:05:41.209413 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-02 06:05:41.209423 | orchestrator | Monday 02 February 2026 06:05:23 +0000 (0:00:02.124) 0:31:50.610 *******
2026-02-02 06:05:41.209434 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:05:41.209445 | orchestrator |
2026-02-02 06:05:41.209456 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-02 06:05:41.209466 | orchestrator | Monday 02 February 2026 06:05:24 +0000 (0:00:01.478) 0:31:52.089 *******
2026-02-02 06:05:41.209477 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:05:41.209488 | orchestrator |
2026-02-02 06:05:41.209498 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-02 06:05:41.209509 | orchestrator | Monday 02 February 2026 06:05:25 +0000 (0:00:01.141) 0:31:53.231 *******
2026-02-02 06:05:41.209520 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:05:41.209530 | orchestrator |
2026-02-02 06:05:41.209541 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-02 06:05:41.209552 | orchestrator | Monday 02 February 2026 06:05:26 +0000 (0:00:00.756) 0:31:53.988 *******
2026-02-02 06:05:41.209563 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:05:41.209573 | orchestrator |
2026-02-02 06:05:41.209590 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-02 06:05:41.209607 | orchestrator | Monday 02 February 2026 06:05:27 +0000 (0:00:00.849) 0:31:54.838 *******
2026-02-02 06:05:41.209626 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2
2026-02-02 06:05:41.209644 | orchestrator |
2026-02-02 06:05:41.209662 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-02 06:05:41.209682 | orchestrator | Monday 02 February 2026 06:05:28 +0000 (0:00:01.197) 0:31:56.036 *******
2026-02-02 06:05:41.209701 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:05:41.209720 | orchestrator |
2026-02-02 06:05:41.209739 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-02 06:05:41.209757 | orchestrator | Monday 02 February 2026 06:05:30 +0000 (0:00:01.866) 0:31:57.902 *******
2026-02-02 06:05:41.209774 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-02 06:05:41.209793 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-02 06:05:41.209804 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-02 06:05:41.209815 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:05:41.209826 | orchestrator |
2026-02-02 06:05:41.209836 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-02 06:05:41.209847 | orchestrator | Monday 02 February 2026 06:05:31 +0000 (0:00:01.146) 0:31:59.049 *******
2026-02-02 06:05:41.209858 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:05:41.209869 | orchestrator |
2026-02-02 06:05:41.209879 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-02 06:05:41.209890 | orchestrator | Monday 02 February 2026 06:05:32 +0000 (0:00:01.106) 0:32:00.156 *******
2026-02-02 06:05:41.209903 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:05:41.209934 | orchestrator |
2026-02-02 06:05:41.209952 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-02 06:05:41.209970 | orchestrator | Monday 02 February 2026 06:05:33 +0000 (0:00:01.395) 0:32:01.552 *******
2026-02-02 06:05:41.209981 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:05:41.209992 | orchestrator |
2026-02-02 06:05:41.210003 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-02 06:05:41.210013 | orchestrator | Monday 02 February 2026 06:05:35 +0000 (0:00:01.156) 0:32:02.708 *******
2026-02-02 06:05:41.210089 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:05:41.210101 | orchestrator |
2026-02-02 06:05:41.210112 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-02 06:05:41.210122 | orchestrator | Monday 02 February 2026 06:05:36 +0000 (0:00:01.159) 0:32:03.868 *******
2026-02-02 06:05:41.210133 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:05:41.210143 | orchestrator |
2026-02-02 06:05:41.210154 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-02 06:05:41.210165 | orchestrator | Monday 02 February 2026 06:05:37 +0000 (0:00:00.782) 0:32:04.651 *******
2026-02-02 06:05:41.210175 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:05:41.210186 | orchestrator |
2026-02-02 06:05:41.210197 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-02 06:05:41.210208 | orchestrator | Monday 02 February 2026 06:05:39 +0000 (0:00:02.227) 0:32:06.879 *******
2026-02-02 06:05:41.210219 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:05:41.210229 | orchestrator |
2026-02-02 06:05:41.210240 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-02 06:05:41.210251 | orchestrator | Monday 02 February 2026 06:05:40 +0000 (0:00:00.779) 0:32:07.658 *******
2026-02-02 06:05:41.210262 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2
2026-02-02 06:05:41.210272 | orchestrator |
2026-02-02 06:05:41.210293 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-02 06:06:18.761090 | orchestrator | Monday 02 February 2026 06:05:41 +0000 (0:00:01.117) 0:32:08.776 *******
2026-02-02 06:06:18.761203 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.761220 | orchestrator |
2026-02-02 06:06:18.761234 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-02 06:06:18.761245 | orchestrator | Monday 02 February 2026 06:05:42 +0000 (0:00:01.148) 0:32:09.925 *******
2026-02-02 06:06:18.761257 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.761268 | orchestrator |
2026-02-02 06:06:18.761279 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-02 06:06:18.761333 | orchestrator | Monday 02 February 2026 06:05:43 +0000 (0:00:01.200) 0:32:11.126 *******
2026-02-02 06:06:18.761346 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.761357 | orchestrator |
2026-02-02 06:06:18.761368 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-02 06:06:18.761379 | orchestrator | Monday 02 February 2026 06:05:44 +0000 (0:00:01.161) 0:32:12.287 *******
2026-02-02 06:06:18.761390 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.761401 | orchestrator |
2026-02-02 06:06:18.761413 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-02 06:06:18.761424 | orchestrator | Monday 02 February 2026 06:05:45 +0000 (0:00:01.169) 0:32:13.457 *******
2026-02-02 06:06:18.761435 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.761445 | orchestrator |
2026-02-02 06:06:18.761456 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-02 06:06:18.761467 | orchestrator | Monday 02 February 2026 06:05:47 +0000 (0:00:01.216) 0:32:14.674 *******
2026-02-02 06:06:18.761478 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.761489 | orchestrator |
2026-02-02 06:06:18.761500 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-02 06:06:18.761511 | orchestrator | Monday 02 February 2026 06:05:48 +0000 (0:00:01.194) 0:32:15.868 *******
2026-02-02 06:06:18.761545 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.761556 | orchestrator |
2026-02-02 06:06:18.761567 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-02 06:06:18.761578 | orchestrator | Monday 02 February 2026 06:05:49 +0000 (0:00:01.196) 0:32:17.065 *******
2026-02-02 06:06:18.761589 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.761600 | orchestrator |
2026-02-02 06:06:18.761610 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-02 06:06:18.761621 | orchestrator | Monday 02 February 2026 06:05:50 +0000 (0:00:01.260) 0:32:18.326 *******
2026-02-02 06:06:18.761635 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:06:18.761648 | orchestrator |
2026-02-02 06:06:18.761661 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-02 06:06:18.761674 | orchestrator | Monday 02 February 2026 06:05:51 +0000 (0:00:00.789) 0:32:19.115 *******
2026-02-02 06:06:18.761687 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2
2026-02-02 06:06:18.761702 | orchestrator |
2026-02-02 06:06:18.761715 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-02 06:06:18.761728 | orchestrator | Monday 02 February 2026 06:05:52 +0000 (0:00:01.108) 0:32:20.223 *******
2026-02-02 06:06:18.761741 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph)
2026-02-02 06:06:18.761769 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/)
2026-02-02 06:06:18.761782 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-02-02 06:06:18.761795 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-02-02 06:06:18.761808 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-02-02 06:06:18.761821 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-02-02 06:06:18.761834 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-02-02 06:06:18.761846 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-02-02 06:06:18.761859 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-02 06:06:18.761872 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-02 06:06:18.761884 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-02 06:06:18.761897 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-02 06:06:18.761910 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-02 06:06:18.761922 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-02 06:06:18.761935 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph)
2026-02-02 06:06:18.761947 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph)
2026-02-02 06:06:18.761961 | orchestrator |
2026-02-02 06:06:18.761973 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-02 06:06:18.761986 | orchestrator | Monday 02 February 2026 06:05:59 +0000 (0:00:06.601) 0:32:26.825 *******
2026-02-02 06:06:18.761997 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.762008 | orchestrator |
2026-02-02 06:06:18.762072 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-02 06:06:18.762086 | orchestrator | Monday 02 February 2026 06:06:00 +0000 (0:00:00.824) 0:32:27.649 *******
2026-02-02 06:06:18.762097 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.762108 | orchestrator |
2026-02-02 06:06:18.762119 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-02 06:06:18.762130 | orchestrator | Monday 02 February 2026 06:06:00 +0000 (0:00:00.762) 0:32:28.412 *******
2026-02-02 06:06:18.762140 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.762151 | orchestrator |
2026-02-02 06:06:18.762171 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-02 06:06:18.762182 | orchestrator | Monday 02 February 2026 06:06:01 +0000 (0:00:00.786) 0:32:29.199 *******
2026-02-02 06:06:18.762193 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.762212 | orchestrator |
2026-02-02 06:06:18.762223 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-02 06:06:18.762251 | orchestrator | Monday 02 February 2026 06:06:02 +0000 (0:00:00.776) 0:32:29.975 *******
2026-02-02 06:06:18.762263 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.762273 | orchestrator |
2026-02-02 06:06:18.762284 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-02 06:06:18.762368 | orchestrator | Monday 02 February 2026 06:06:03 +0000 (0:00:00.800) 0:32:30.776 *******
2026-02-02 06:06:18.762381 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.762392 | orchestrator |
2026-02-02 06:06:18.762403 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-02 06:06:18.762413 | orchestrator | Monday 02 February 2026 06:06:04 +0000 (0:00:01.033) 0:32:31.809 *******
2026-02-02 06:06:18.762424 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.762435 | orchestrator |
2026-02-02 06:06:18.762446 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-02 06:06:18.762457 | orchestrator | Monday 02 February 2026 06:06:05 +0000 (0:00:00.833) 0:32:32.642 *******
2026-02-02 06:06:18.762467 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.762478 | orchestrator |
2026-02-02 06:06:18.762489 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-02 06:06:18.762500 | orchestrator | Monday 02 February 2026 06:06:05 +0000 (0:00:00.769) 0:32:33.411 *******
2026-02-02 06:06:18.762510 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.762521 | orchestrator |
2026-02-02 06:06:18.762532 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-02 06:06:18.762542 | orchestrator | Monday 02 February 2026 06:06:06 +0000 (0:00:00.816) 0:32:34.228 *******
2026-02-02 06:06:18.762553 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.762564 | orchestrator |
2026-02-02 06:06:18.762574 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-02 06:06:18.762585 | orchestrator | Monday 02 February 2026 06:06:07 +0000 (0:00:00.794) 0:32:35.023 *******
2026-02-02 06:06:18.762596 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.762607 | orchestrator |
2026-02-02 06:06:18.762617 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-02 06:06:18.762628 | orchestrator | Monday 02 February 2026 06:06:08 +0000 (0:00:00.805) 0:32:35.828 *******
2026-02-02 06:06:18.762639 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.762649 | orchestrator |
2026-02-02 06:06:18.762660 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-02 06:06:18.762671 | orchestrator | Monday 02 February 2026 06:06:09 +0000 (0:00:00.814) 0:32:36.643 *******
2026-02-02 06:06:18.762681 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.762692 | orchestrator |
2026-02-02 06:06:18.762703 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-02 06:06:18.762713 | orchestrator | Monday 02 February 2026 06:06:09 +0000 (0:00:00.882) 0:32:37.526 *******
2026-02-02 06:06:18.762724 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.762735 | orchestrator |
2026-02-02 06:06:18.762745 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-02 06:06:18.762756 | orchestrator | Monday 02 February 2026 06:06:10 +0000 (0:00:00.794) 0:32:38.321 *******
2026-02-02 06:06:18.762767 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.762777 | orchestrator |
2026-02-02 06:06:18.762795 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-02 06:06:18.762806 | orchestrator | Monday 02 February 2026 06:06:11 +0000 (0:00:00.880) 0:32:39.201 *******
2026-02-02 06:06:18.762816 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.762827 | orchestrator |
2026-02-02 06:06:18.762838 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-02 06:06:18.762849 | orchestrator | Monday 02 February 2026 06:06:12 +0000 (0:00:00.769) 0:32:39.971 *******
2026-02-02 06:06:18.762867 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.762878 | orchestrator |
2026-02-02 06:06:18.762889 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-02 06:06:18.762901 | orchestrator | Monday 02 February 2026 06:06:13 +0000 (0:00:00.811) 0:32:40.783 *******
2026-02-02 06:06:18.762912 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.762923 | orchestrator |
2026-02-02 06:06:18.762933 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-02 06:06:18.762944 | orchestrator | Monday 02 February 2026 06:06:14 +0000 (0:00:00.836) 0:32:41.619 *******
2026-02-02 06:06:18.762955 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.762966 | orchestrator |
2026-02-02 06:06:18.762977 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-02 06:06:18.762988 | orchestrator | Monday 02 February 2026 06:06:15 +0000 (0:00:00.981) 0:32:42.601 *******
2026-02-02 06:06:18.762998 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.763009 | orchestrator |
2026-02-02 06:06:18.763020 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-02 06:06:18.763030 | orchestrator | Monday 02 February 2026 06:06:15 +0000 (0:00:00.773) 0:32:43.374 *******
2026-02-02 06:06:18.763041 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.763052 | orchestrator |
2026-02-02 06:06:18.763062 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-02 06:06:18.763073 | orchestrator | Monday 02 February 2026 06:06:16 +0000 (0:00:00.758) 0:32:44.133 *******
2026-02-02 06:06:18.763084 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-02 06:06:18.763094 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-02 06:06:18.763105 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-02 06:06:18.763116 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:06:18.763126 | orchestrator |
2026-02-02 06:06:18.763137 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-02 06:06:18.763148 | orchestrator | Monday 02 February 2026 06:06:17 +0000 (0:00:01.128) 0:32:45.261 *******
2026-02-02 06:06:18.763158 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-02 06:06:18.763176 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-02 06:07:15.865473 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-02 06:07:15.865594 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:07:15.865612 | orchestrator |
2026-02-02 06:07:15.865626 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-02 06:07:15.865639 | orchestrator | Monday 02 February 2026 06:06:18 +0000 (0:00:01.069) 0:32:46.330 *******
2026-02-02 06:07:15.865650 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-02 06:07:15.865661 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-02 06:07:15.865672 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-02 06:07:15.865683 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:07:15.865694 | orchestrator |
2026-02-02 06:07:15.865705 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-02 06:07:15.865716 | orchestrator | Monday 02 February 2026 06:06:19 +0000 (0:00:01.067) 0:32:47.398 *******
2026-02-02 06:07:15.865727 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:07:15.865738 | orchestrator |
2026-02-02 06:07:15.865749 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-02 06:07:15.865760 | orchestrator | Monday 02 February 2026 06:06:20 +0000 (0:00:00.792) 0:32:48.190 *******
2026-02-02 06:07:15.865771 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-02-02 06:07:15.865782 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:07:15.865793 | orchestrator |
2026-02-02 06:07:15.865804 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-02 06:07:15.865840 | orchestrator | Monday 02 February 2026 06:06:21 +0000 (0:00:01.044) 0:32:49.235 *******
2026-02-02 06:07:15.865857 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:07:15.865876 | orchestrator |
2026-02-02 06:07:15.865896 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-02 06:07:15.865910 | orchestrator | Monday 02 February 2026 06:06:23 +0000 (0:00:01.461) 0:32:50.696 *******
2026-02-02 06:07:15.865921 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-02 06:07:15.865933 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 06:07:15.865943 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-02 06:07:15.865954 | orchestrator |
2026-02-02 06:07:15.865966 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-02 06:07:15.865977 | orchestrator | Monday 02 February 2026 06:06:24 +0000 (0:00:01.647) 0:32:52.343 *******
2026-02-02 06:07:15.865988 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-2
2026-02-02 06:07:15.866001 | orchestrator |
2026-02-02 06:07:15.866079 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-02-02 06:07:15.866096 | orchestrator | Monday 02 February 2026 06:06:25 +0000 (0:00:01.160) 0:32:53.504 *******
2026-02-02 06:07:15.866108 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:07:15.866121 | orchestrator |
2026-02-02 06:07:15.866134 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-02-02 06:07:15.866147 | orchestrator | Monday 02 February 2026 06:06:27 +0000 (0:00:01.602) 0:32:55.107 *******
2026-02-02 06:07:15.866161 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:07:15.866188 | orchestrator |
2026-02-02 06:07:15.866202 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-02-02 06:07:15.866215 | orchestrator | Monday 02 February 2026 06:06:28 +0000 (0:00:01.130) 0:32:56.237 *******
2026-02-02 06:07:15.866227 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 06:07:15.866240 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 06:07:15.866253 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 06:07:15.866266 | orchestrator | ok: [testbed-node-2 -> {{ groups[mon_group_name][0] }}]
2026-02-02 06:07:15.866300 | orchestrator |
2026-02-02 06:07:15.866313 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-02-02 06:07:15.866323 | orchestrator | Monday 02 February 2026 06:06:35 +0000 (0:00:07.137) 0:33:03.375 *******
2026-02-02 06:07:15.866334 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:07:15.866345 | orchestrator |
2026-02-02 06:07:15.866355 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-02-02 06:07:15.866366 | orchestrator | Monday 02 February 2026 06:06:36 +0000 (0:00:01.160) 0:33:04.536 *******
2026-02-02 06:07:15.866377 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-02 06:07:15.866388 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-02-02 06:07:15.866399 | orchestrator |
2026-02-02 06:07:15.866409 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-02-02 06:07:15.866420 | orchestrator | Monday 02 February 2026 06:06:40 +0000 (0:00:03.178) 0:33:07.714 *******
2026-02-02 06:07:15.866431 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-02 06:07:15.866441 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-02-02 06:07:15.866453 | orchestrator |
2026-02-02 06:07:15.866464 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-02-02 06:07:15.866475 | orchestrator | Monday 02 February 2026 06:06:42 +0000 (0:00:02.027) 0:33:09.742 *******
2026-02-02 06:07:15.866485 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:07:15.866496 | orchestrator |
2026-02-02 06:07:15.866516 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-02-02 06:07:15.866535 | orchestrator | Monday 02 February 2026 06:06:43 +0000 (0:00:01.551) 0:33:11.294 *******
2026-02-02 06:07:15.866563 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:07:15.866574 | orchestrator |
2026-02-02 06:07:15.866585 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-02 06:07:15.866596 | orchestrator | Monday 02 February 2026 06:06:44 +0000 (0:00:00.783) 0:33:12.077 *******
2026-02-02 06:07:15.866606 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:07:15.866617 | orchestrator |
2026-02-02 06:07:15.866628 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-02 06:07:15.866659 | orchestrator | Monday 02 February 2026 06:06:45 +0000 (0:00:00.759) 0:33:12.837 *******
2026-02-02 06:07:15.866670 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-2
2026-02-02 06:07:15.866681 | orchestrator |
2026-02-02 06:07:15.866692 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-02-02 06:07:15.866702 | orchestrator | Monday 02 February 2026 06:06:46 +0000 (0:00:01.114) 0:33:13.952 *******
2026-02-02 06:07:15.866713 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:07:15.866723 | orchestrator |
2026-02-02 06:07:15.866734 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-02-02 06:07:15.866745 | orchestrator | Monday 02 February 2026 06:06:47 +0000 (0:00:01.145) 0:33:15.097 *******
2026-02-02
06:07:15.866755 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:07:15.866766 | orchestrator | 2026-02-02 06:07:15.866777 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-02 06:07:15.866787 | orchestrator | Monday 02 February 2026 06:06:48 +0000 (0:00:01.129) 0:33:16.227 ******* 2026-02-02 06:07:15.866798 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-2 2026-02-02 06:07:15.866808 | orchestrator | 2026-02-02 06:07:15.866819 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-02 06:07:15.866830 | orchestrator | Monday 02 February 2026 06:06:49 +0000 (0:00:01.253) 0:33:17.480 ******* 2026-02-02 06:07:15.866840 | orchestrator | ok: [testbed-node-2] 2026-02-02 06:07:15.866851 | orchestrator | 2026-02-02 06:07:15.866861 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-02 06:07:15.866872 | orchestrator | Monday 02 February 2026 06:06:52 +0000 (0:00:02.169) 0:33:19.650 ******* 2026-02-02 06:07:15.866883 | orchestrator | ok: [testbed-node-2] 2026-02-02 06:07:15.866893 | orchestrator | 2026-02-02 06:07:15.866904 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-02 06:07:15.866914 | orchestrator | Monday 02 February 2026 06:06:54 +0000 (0:00:01.998) 0:33:21.649 ******* 2026-02-02 06:07:15.866925 | orchestrator | ok: [testbed-node-2] 2026-02-02 06:07:15.866936 | orchestrator | 2026-02-02 06:07:15.866947 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-02-02 06:07:15.866957 | orchestrator | Monday 02 February 2026 06:06:56 +0000 (0:00:02.444) 0:33:24.093 ******* 2026-02-02 06:07:15.866968 | orchestrator | changed: [testbed-node-2] 2026-02-02 06:07:15.866979 | orchestrator | 2026-02-02 06:07:15.866989 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] 
************************************** 2026-02-02 06:07:15.867000 | orchestrator | Monday 02 February 2026 06:07:00 +0000 (0:00:03.492) 0:33:27.586 ******* 2026-02-02 06:07:15.867016 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-02-02 06:07:15.867034 | orchestrator | 2026-02-02 06:07:15.867053 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-02-02 06:07:15.867073 | orchestrator | Monday 02 February 2026 06:07:01 +0000 (0:00:01.560) 0:33:29.146 ******* 2026-02-02 06:07:15.867092 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-02 06:07:15.867110 | orchestrator | 2026-02-02 06:07:15.867124 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-02-02 06:07:15.867141 | orchestrator | Monday 02 February 2026 06:07:03 +0000 (0:00:02.390) 0:33:31.537 ******* 2026-02-02 06:07:15.867152 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-02 06:07:15.867163 | orchestrator | 2026-02-02 06:07:15.867174 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-02-02 06:07:15.867192 | orchestrator | Monday 02 February 2026 06:07:06 +0000 (0:00:02.332) 0:33:33.869 ******* 2026-02-02 06:07:15.867203 | orchestrator | ok: [testbed-node-2] 2026-02-02 06:07:15.867214 | orchestrator | 2026-02-02 06:07:15.867225 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-02-02 06:07:15.867235 | orchestrator | Monday 02 February 2026 06:07:07 +0000 (0:00:01.330) 0:33:35.199 ******* 2026-02-02 06:07:15.867246 | orchestrator | ok: [testbed-node-2] 2026-02-02 06:07:15.867256 | orchestrator | 2026-02-02 06:07:15.867267 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-02-02 06:07:15.867319 | orchestrator | Monday 02 February 2026 
06:07:08 +0000 (0:00:01.151) 0:33:36.351 ******* 2026-02-02 06:07:15.867333 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)  2026-02-02 06:07:15.867344 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)  2026-02-02 06:07:15.867355 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:07:15.867366 | orchestrator | 2026-02-02 06:07:15.867376 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2026-02-02 06:07:15.867387 | orchestrator | Monday 02 February 2026 06:07:10 +0000 (0:00:01.758) 0:33:38.109 ******* 2026-02-02 06:07:15.867398 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-02-02 06:07:15.867408 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)  2026-02-02 06:07:15.867419 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)  2026-02-02 06:07:15.867430 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-02-02 06:07:15.867443 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:07:15.867462 | orchestrator | 2026-02-02 06:07:15.867488 | orchestrator | PLAY [Set osd flags] *********************************************************** 2026-02-02 06:07:15.867512 | orchestrator | 2026-02-02 06:07:15.867530 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-02 06:07:15.867548 | orchestrator | Monday 02 February 2026 06:07:12 +0000 (0:00:01.964) 0:33:40.073 ******* 2026-02-02 06:07:15.867567 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:07:15.867586 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:07:15.867604 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:07:15.867621 | orchestrator | 2026-02-02 06:07:15.867638 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-02 06:07:15.867655 | orchestrator | Monday 02 February 2026 06:07:14 +0000 (0:00:01.666) 0:33:41.740 ******* 2026-02-02 06:07:15.867675 | 
orchestrator | ok: [testbed-node-3] 2026-02-02 06:07:15.867693 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:07:15.867711 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:07:15.867731 | orchestrator | 2026-02-02 06:07:15.867767 | orchestrator | TASK [Get pool list] *********************************************************** 2026-02-02 06:07:22.815827 | orchestrator | Monday 02 February 2026 06:07:15 +0000 (0:00:01.691) 0:33:43.431 ******* 2026-02-02 06:07:22.815960 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-02 06:07:22.815983 | orchestrator | 2026-02-02 06:07:22.816002 | orchestrator | TASK [Get balancer module status] ********************************************** 2026-02-02 06:07:22.816020 | orchestrator | Monday 02 February 2026 06:07:19 +0000 (0:00:03.359) 0:33:46.791 ******* 2026-02-02 06:07:22.816036 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-02 06:07:22.816053 | orchestrator | 2026-02-02 06:07:22.816070 | orchestrator | TASK [Set_fact pools_pgautoscaler_mode] **************************************** 2026-02-02 06:07:22.816087 | orchestrator | Monday 02 February 2026 06:07:22 +0000 (0:00:03.021) 0:33:49.813 ******* 2026-02-02 06:07:22.816129 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 1, 'pool_name': '.mgr', 'create_time': '2026-02-02T03:33:44.562673+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '20', 
'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_acting': 6.059999942779541, 'score_stable': 6.059999942779541, 'optimal_score': 0.33000001311302185, 'raw_score_acting': 2, 'raw_score_stable': 2, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-02 06:07:22.816193 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 2, 'pool_name': 'cephfs_data', 'create_time': '2026-02-02T03:34:58.574972+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 
'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '32', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'cephfs': {'data': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-02 06:07:22.816213 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 3, 'pool_name': 'cephfs_metadata', 'create_time': '2026-02-02T03:35:02.208791+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 16, 'pg_placement_num': 16, 'pg_placement_num_target': 16, 'pg_num_target': 16, 'pg_num_pending': 16, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 
0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '74', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_autoscale_bias': 4, 'pg_num_min': 16, 'recovery_priority': 5}, 'application_metadata': {'cephfs': {'metadata': 'cephfs'}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-02 06:07:22.816240 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 4, 'pool_name': 'default.rgw.buckets.data', 'create_time': '2026-02-02T03:36:00.476644+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 
'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '82', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '76', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-02 06:07:23.261862 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 5, 'pool_name': 'default.rgw.buckets.index', 'create_time': '2026-02-02T03:36:06.798795+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 
'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '82', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '76', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-02 06:07:23.262089 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 6, 'pool_name': 'default.rgw.control', 'create_time': '2026-02-02T03:36:12.616001+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 
'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '82', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '78', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 2.059999942779541, 'score_stable': 2.059999942779541, 'optimal_score': 1, 'raw_score_acting': 2.059999942779541, 'raw_score_stable': 2.059999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-02 06:07:23.262154 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 7, 'pool_name': 'default.rgw.log', 'create_time': '2026-02-02T03:36:17.687476+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 
'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '207', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '78', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-02 06:07:23.262190 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 8, 'pool_name': 'default.rgw.meta', 'create_time': 
'2026-02-02T03:36:22.883703+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '82', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '80', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-02 06:07:23.262236 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => 
(item={'pool_id': 9, 'pool_name': '.rgw.root', 'create_time': '2026-02-02T03:36:34.543251+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '82', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '80', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-02 06:07:23.955826 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 10, 'pool_name': 'backups', 'create_time': '2026-02-02T03:37:21.466569+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '108', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 108, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 
2026-02-02 06:07:23.955951 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 11, 'pool_name': 'volumes', 'create_time': '2026-02-02T03:37:30.003810+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '116', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 116, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.1299999952316284, 'score_stable': 1.1299999952316284, 'optimal_score': 1, 'raw_score_acting': 1.1299999952316284, 'raw_score_stable': 1.1299999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}})
2026-02-02 06:07:23.956048 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 12, 'pool_name': 'images', 'create_time': '2026-02-02T03:37:38.673999+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '217', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 6, 'snap_epoch': 217, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}})
2026-02-02 06:07:23.956176 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 13, 'pool_name': 'metrics', 'create_time': '2026-02-02T03:37:47.687559+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '132', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 132, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}})
2026-02-02 06:07:23.956235 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 14, 'pool_name': 'vms', 'create_time': '2026-02-02T03:37:55.918337+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '138', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 138, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}})
2026-02-02 06:09:14.084433 | orchestrator |
2026-02-02 06:09:14.084548 | orchestrator | TASK [Disable balancer] ********************************************************
2026-02-02 06:09:14.084591 | orchestrator | Monday 02 February 2026 06:07:25 +0000 (0:00:03.058) 0:33:52.871 *******
2026-02-02 06:09:14.084605 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-02 06:09:14.084617 | orchestrator |
2026-02-02 06:09:14.084628 | orchestrator | TASK [Disable pg autoscale on pools] *******************************************
2026-02-02 06:09:14.084640 | orchestrator | Monday 02 February 2026 06:07:28 +0000 (0:00:03.325) 0:33:56.196 *******
2026-02-02 06:09:14.084651 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'})
2026-02-02 06:09:14.084664 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'})
2026-02-02 06:09:14.084676 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'})
2026-02-02 06:09:14.084688 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'})
2026-02-02 06:09:14.084700 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'})
2026-02-02 06:09:14.084711 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'})
2026-02-02 06:09:14.084723 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'})
2026-02-02 06:09:14.084734 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'})
2026-02-02 06:09:14.084746 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'})
2026-02-02 06:09:14.084757 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})
2026-02-02 06:09:14.084768 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})
2026-02-02 06:09:14.084780 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})
2026-02-02 06:09:14.084806 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})
2026-02-02 06:09:14.084818 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})
2026-02-02 06:09:14.084829 | orchestrator |
2026-02-02 06:09:14.084840 | orchestrator | TASK [Set osd flags] ***********************************************************
2026-02-02 06:09:14.084851 | orchestrator | Monday 02 February 2026 06:08:45 +0000 (0:01:16.419) 0:35:12.615 *******
2026-02-02 06:09:14.084861 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout)
2026-02-02 06:09:14.084872 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub)
2026-02-02 06:09:14.084883 | orchestrator |
2026-02-02 06:09:14.084894 | orchestrator | PLAY [Upgrade ceph osds cluster] ***********************************************
2026-02-02 06:09:14.084904 | orchestrator |
2026-02-02 06:09:14.084915 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-02 06:09:14.084926 | orchestrator | Monday 02 February 2026 06:08:50 +0000 (0:00:05.939) 0:35:18.555 *******
2026-02-02 06:09:14.084936 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3
2026-02-02 06:09:14.084947 | orchestrator |
2026-02-02 06:09:14.084958 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-02 06:09:14.084972 | orchestrator | Monday 02 February 2026 06:08:52 +0000 (0:00:01.176) 0:35:19.732 *******
2026-02-02 06:09:14.084985 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:09:14.084999 | orchestrator |
2026-02-02 06:09:14.085013 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-02 06:09:14.085026 | orchestrator | Monday 02 February 2026 06:08:53 +0000 (0:00:01.439) 0:35:21.171 *******
2026-02-02 06:09:14.085039 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:09:14.085053 | orchestrator |
2026-02-02 06:09:14.085066 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-02 06:09:14.085086 | orchestrator | Monday 02 February 2026 06:08:54 +0000 (0:00:01.179) 0:35:22.350 *******
2026-02-02 06:09:14.085100 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:09:14.085113 | orchestrator |
2026-02-02 06:09:14.085125 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-02 06:09:14.085138 | orchestrator | Monday 02 February 2026 06:08:56 +0000 (0:00:01.541) 0:35:23.892 *******
2026-02-02 06:09:14.085151 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:09:14.085164 | orchestrator |
2026-02-02 06:09:14.085194 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-02 06:09:14.085218 | orchestrator | Monday 02 February 2026 06:08:57 +0000 (0:00:01.133) 0:35:25.026 *******
2026-02-02 06:09:14.085231 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:09:14.085244 | orchestrator |
2026-02-02 06:09:14.085283 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-02 06:09:14.085298 | orchestrator | Monday 02 February 2026 06:08:58 +0000 (0:00:01.154) 0:35:26.181 *******
2026-02-02 06:09:14.085311 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:09:14.085323 | orchestrator |
2026-02-02 06:09:14.085334 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-02 06:09:14.085345 | orchestrator | Monday 02 February 2026 06:08:59 +0000 (0:00:01.202) 0:35:27.383 *******
2026-02-02 06:09:14.085356 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:09:14.085367 | orchestrator |
2026-02-02 06:09:14.085378 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-02 06:09:14.085408 | orchestrator | Monday 02 February 2026 06:09:00 +0000 (0:00:01.144) 0:35:28.528 *******
2026-02-02 06:09:14.085419 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:09:14.085430 | orchestrator |
2026-02-02 06:09:14.085441 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-02 06:09:14.085452 | orchestrator | Monday 02 February 2026 06:09:02 +0000 (0:00:01.157) 0:35:29.685 *******
2026-02-02 06:09:14.085462 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-02 06:09:14.085473 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 06:09:14.085484 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 06:09:14.085494 | orchestrator |
2026-02-02 06:09:14.085505 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-02 06:09:14.085516 | orchestrator | Monday 02 February 2026 06:09:03 +0000 (0:00:01.682) 0:35:31.368 *******
2026-02-02 06:09:14.085526 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:09:14.085537 | orchestrator |
2026-02-02 06:09:14.085547 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-02 06:09:14.085558 | orchestrator | Monday 02 February 2026 06:09:05 +0000 (0:00:01.228) 0:35:32.596 *******
2026-02-02 06:09:14.085569 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-02 06:09:14.085579 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 06:09:14.085590 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 06:09:14.085601 | orchestrator |
2026-02-02 06:09:14.085611 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-02 06:09:14.085622 | orchestrator | Monday 02 February 2026 06:09:08 +0000 (0:00:03.200) 0:35:35.797 *******
2026-02-02 06:09:14.085633 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-02 06:09:14.085643 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-02 06:09:14.085654 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-02 06:09:14.085665 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:09:14.085675 | orchestrator |
2026-02-02 06:09:14.085686 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-02 06:09:14.085697 | orchestrator | Monday 02 February 2026 06:09:09 +0000 (0:00:01.446) 0:35:37.243 *******
2026-02-02 06:09:14.085722 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-02 06:09:14.085736 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-02 06:09:14.085747 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-02 06:09:14.085758 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:09:14.085769 | orchestrator |
2026-02-02 06:09:14.085780 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-02 06:09:14.085790 | orchestrator | Monday 02 February 2026 06:09:11 +0000 (0:00:02.059) 0:35:39.303 *******
2026-02-02 06:09:14.085803 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-02 06:09:14.085818 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-02 06:09:14.085829 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-02 06:09:14.085840 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:09:14.085851 | orchestrator |
2026-02-02 06:09:14.085862 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-02 06:09:14.085873 | orchestrator | Monday 02 February 2026 06:09:12 +0000 (0:00:01.145) 0:35:40.449 *******
2026-02-02 06:09:14.085893 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '01c921aa07f8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-02 06:09:05.547212', 'end': '2026-02-02 06:09:05.592403', 'delta': '0:00:00.045191', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['01c921aa07f8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-02 06:09:32.618753 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c530967d0aad', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-02 06:09:06.112178', 'end': '2026-02-02 06:09:06.161964', 'delta': '0:00:00.049786', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c530967d0aad'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-02 06:09:32.618944 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a68c96a70534', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-02 06:09:07.006250', 'end': '2026-02-02 06:09:07.049389', 'delta': '0:00:00.043139', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a68c96a70534'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-02 06:09:32.618976 | orchestrator |
2026-02-02 06:09:32.618996 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-02 06:09:32.619008 | orchestrator | Monday 02 February 2026 06:09:14 +0000 (0:00:01.205) 0:35:41.654 *******
2026-02-02 06:09:32.619020 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:09:32.619032 | orchestrator |
2026-02-02 06:09:32.619043 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-02 06:09:32.619053 | orchestrator | Monday 02 February 2026 06:09:15 +0000 (0:00:01.817) 0:35:43.472 *******
2026-02-02 06:09:32.619064 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:09:32.619076 | orchestrator |
2026-02-02 06:09:32.619095 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-02 06:09:32.619122 | orchestrator | Monday 02 February 2026 06:09:17 +0000 (0:00:01.242) 0:35:44.715 *******
2026-02-02 06:09:32.619144 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:09:32.619162 | orchestrator |
2026-02-02 06:09:32.619180 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-02 06:09:32.619197 | orchestrator | Monday 02 February 2026 06:09:18 +0000 (0:00:01.149) 0:35:45.865 *******
2026-02-02 06:09:32.619215 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-02 06:09:32.619233 | orchestrator |
2026-02-02 06:09:32.619285 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-02 06:09:32.619306 | orchestrator | Monday 02 February 2026 06:09:20 +0000 (0:00:02.015) 0:35:47.880 *******
2026-02-02 06:09:32.619322 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:09:32.619334 | orchestrator |
2026-02-02 06:09:32.619347 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-02 06:09:32.619360 | orchestrator | Monday 02 February 2026 06:09:21 +0000 (0:00:01.223) 0:35:49.104 *******
2026-02-02 06:09:32.619373 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:09:32.619386 | orchestrator |
2026-02-02 06:09:32.619399 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-02 06:09:32.619412 | orchestrator | Monday 02 February 2026 06:09:22 +0000 (0:00:01.109) 0:35:50.213 *******
2026-02-02 06:09:32.619425 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:09:32.619439 | orchestrator |
2026-02-02 06:09:32.619451 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-02 06:09:32.619464 | orchestrator | Monday 02 February 2026 06:09:23 +0000 (0:00:01.206) 0:35:51.419 *******
2026-02-02 06:09:32.619476 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:09:32.619488 | orchestrator |
2026-02-02 06:09:32.619502 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-02 06:09:32.619514 | orchestrator | Monday 02 February 2026 06:09:24 +0000 (0:00:01.120) 0:35:52.539 *******
2026-02-02 06:09:32.619526 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:09:32.619539 | orchestrator |
2026-02-02 06:09:32.619552 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-02 06:09:32.619577 | orchestrator | Monday 02 February 2026 06:09:26 +0000 (0:00:01.138) 0:35:53.678 *******
2026-02-02 06:09:32.619590 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:09:32.619603 | orchestrator |
2026-02-02 06:09:32.619616 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-02 06:09:32.619629 | orchestrator | Monday 02 February 2026 06:09:27 +0000 (0:00:01.219) 0:35:54.898 *******
2026-02-02 06:09:32.619642 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:09:32.619653 | orchestrator |
2026-02-02 06:09:32.619664 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-02 06:09:32.619675 | orchestrator | Monday 02 February 2026 06:09:28 +0000 (0:00:01.169) 0:35:56.068 *******
2026-02-02 06:09:32.619712 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:09:32.619723 | orchestrator |
2026-02-02 06:09:32.619734 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-02 06:09:32.619745 | orchestrator | Monday 02 February 2026 06:09:29 +0000 (0:00:01.204) 0:35:57.273 *******
2026-02-02 06:09:32.619776 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:09:32.619789 | orchestrator |
2026-02-02 06:09:32.619809 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-02 06:09:32.619830 | orchestrator | Monday 02 February 2026 06:09:30 +0000 (0:00:01.131) 0:35:58.404 *******
2026-02-02 06:09:32.619850 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:09:32.619869 | orchestrator |
2026-02-02 06:09:32.619888 | orchestrator | TASK [ceph-facts : Collect existed devices]
************************************
2026-02-02 06:09:32.619908 | orchestrator | Monday 02 February 2026 06:09:31 +0000 (0:00:01.175) 0:35:59.580 *******
2026-02-02 06:09:32.619930 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-02 06:09:32.619966 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--af42a967--eb71--546a--abb0--a5185990ed2a-osd--block--af42a967--eb71--546a--abb0--a5185990ed2a', 'dm-uuid-LVM-nQNI9mGSypmWJN7Kribh0RNL5qLQKFSceYxT4mfzBYfoYiha3ZzoEdYR0rTnnIvK'], 'uuids': ['a78e3f4b-723a-42a3-abd4-4d699a55c416'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1f26c814', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['eYxT4m-fzBY-foYi-ha3Z-zoEd-YR0r-TnnIvK']}})
2026-02-02 06:09:32.619988 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c15f901f-7629-41e5-bfd5-e721d3f198c6', 'scsi-SQEMU_QEMU_HARDDISK_c15f901f-7629-41e5-bfd5-e721d3f198c6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c15f901f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-02 06:09:32.620011 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-HOxmXw-N5cX-V1Nz-Lu3r-OQk9-N5gG-1syyTi', 'scsi-0QEMU_QEMU_HARDDISK_5578c4aa-4507-4a80-9665-78072b9f11f4', 'scsi-SQEMU_QEMU_HARDDISK_5578c4aa-4507-4a80-9665-78072b9f11f4'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5578c4aa', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--2b8f5a57--fc4d--5c4a--8869--764dca19b379-osd--block--2b8f5a57--fc4d--5c4a--8869--764dca19b379']}})
2026-02-02 06:09:32.620048 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-02 06:09:32.620070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-02 06:09:32.620094 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-02 06:09:34.126523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-02 06:09:34.126647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-gyH8a8-4MHr-Jn7b-cz7p-hSu8-LEA3-bm3DqO', 'dm-uuid-CRYPT-LUKS2-8edeb25f170042ba8e6d0505727d2968-gyH8a8-4MHr-Jn7b-cz7p-hSu8-LEA3-bm3DqO'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-02 06:09:34.126669 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-02 06:09:34.126682 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2b8f5a57--fc4d--5c4a--8869--764dca19b379-osd--block--2b8f5a57--fc4d--5c4a--8869--764dca19b379', 'dm-uuid-LVM-2Xx1rXy8ZvvzVeymXUM2Y23jmTeKUn30gyH8a84MHrJn7bcz7phSu8LEA3bm3DqO'], 'uuids': ['8edeb25f-1700-42ba-8e6d-0505727d2968'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5578c4aa', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['gyH8a8-4MHr-Jn7b-cz7p-hSu8-LEA3-bm3DqO']}})
2026-02-02 06:09:34.126717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-yf6lEa-f3nO-iewk-DEDy-Fb6j-Kq2P-dbkgMf', 'scsi-0QEMU_QEMU_HARDDISK_1f26c814-af40-4046-ac8d-013998d956cc', 'scsi-SQEMU_QEMU_HARDDISK_1f26c814-af40-4046-ac8d-013998d956cc'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1f26c814', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--af42a967--eb71--546a--abb0--a5185990ed2a-osd--block--af42a967--eb71--546a--abb0--a5185990ed2a']}})
2026-02-02 06:09:34.126729 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-02 06:09:34.126774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2944b273', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part16', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part14', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part15', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part1', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-02 06:09:34.126789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-02 06:09:34.126801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-02 06:09:34.126819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-eYxT4m-fzBY-foYi-ha3Z-zoEd-YR0r-TnnIvK', 'dm-uuid-CRYPT-LUKS2-a78e3f4b723a42a3abd44d699a55c416-eYxT4m-fzBY-foYi-ha3Z-zoEd-YR0r-TnnIvK'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-02 06:09:34.126836 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:09:34.126858 | orchestrator |
2026-02-02 06:09:34.126892 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-02 06:09:34.126912 | orchestrator | Monday 02 February 2026 06:09:33 +0000 (0:00:01.910) 0:36:01.490 *******
2026-02-02 06:09:34.126933 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:09:34.126967 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--af42a967--eb71--546a--abb0--a5185990ed2a-osd--block--af42a967--eb71--546a--abb0--a5185990ed2a', 'dm-uuid-LVM-nQNI9mGSypmWJN7Kribh0RNL5qLQKFSceYxT4mfzBYfoYiha3ZzoEdYR0rTnnIvK'], 'uuids': ['a78e3f4b-723a-42a3-abd4-4d699a55c416'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1f26c814', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['eYxT4m-fzBY-foYi-ha3Z-zoEd-YR0r-TnnIvK']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:09:35.360910 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c15f901f-7629-41e5-bfd5-e721d3f198c6', 'scsi-SQEMU_QEMU_HARDDISK_c15f901f-7629-41e5-bfd5-e721d3f198c6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c15f901f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:09:35.361063 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-HOxmXw-N5cX-V1Nz-Lu3r-OQk9-N5gG-1syyTi', 'scsi-0QEMU_QEMU_HARDDISK_5578c4aa-4507-4a80-9665-78072b9f11f4', 'scsi-SQEMU_QEMU_HARDDISK_5578c4aa-4507-4a80-9665-78072b9f11f4'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5578c4aa', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--2b8f5a57--fc4d--5c4a--8869--764dca19b379-osd--block--2b8f5a57--fc4d--5c4a--8869--764dca19b379']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:09:35.361106 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:09:35.361120 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:09:35.361132 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:09:35.361169 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:09:35.361182 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-gyH8a8-4MHr-Jn7b-cz7p-hSu8-LEA3-bm3DqO', 'dm-uuid-CRYPT-LUKS2-8edeb25f170042ba8e6d0505727d2968-gyH8a8-4MHr-Jn7b-cz7p-hSu8-LEA3-bm3DqO'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:09:35.361193 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:09:35.361212 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2b8f5a57--fc4d--5c4a--8869--764dca19b379-osd--block--2b8f5a57--fc4d--5c4a--8869--764dca19b379', 'dm-uuid-LVM-2Xx1rXy8ZvvzVeymXUM2Y23jmTeKUn30gyH8a84MHrJn7bcz7phSu8LEA3bm3DqO'], 'uuids': ['8edeb25f-1700-42ba-8e6d-0505727d2968'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5578c4aa', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['gyH8a8-4MHr-Jn7b-cz7p-hSu8-LEA3-bm3DqO']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:09:35.361224 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-yf6lEa-f3nO-iewk-DEDy-Fb6j-Kq2P-dbkgMf', 'scsi-0QEMU_QEMU_HARDDISK_1f26c814-af40-4046-ac8d-013998d956cc', 'scsi-SQEMU_QEMU_HARDDISK_1f26c814-af40-4046-ac8d-013998d956cc'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1f26c814', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--af42a967--eb71--546a--abb0--a5185990ed2a-osd--block--af42a967--eb71--546a--abb0--a5185990ed2a']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:09:35.361330 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:09:54.918731 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2944b273', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part16', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part14', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part15', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part1', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:09:54.918841 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:09:54.918852 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:09:54.918876 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-eYxT4m-fzBY-foYi-ha3Z-zoEd-YR0r-TnnIvK', 'dm-uuid-CRYPT-LUKS2-a78e3f4b723a42a3abd44d699a55c416-eYxT4m-fzBY-foYi-ha3Z-zoEd-YR0r-TnnIvK'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:09:54.918884 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:09:54.918893 | orchestrator |
2026-02-02 06:09:54.918900 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-02 06:09:54.918908 | orchestrator | Monday 02 February 2026 06:09:35 +0000 (0:00:01.482) 0:36:02.938 *******
2026-02-02 06:09:54.918914 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:09:54.918921 | orchestrator |
2026-02-02 06:09:54.918928 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-02 06:09:54.918934 | orchestrator | Monday 02 February 2026 06:09:36 +0000 (0:00:01.231) 0:36:04.420 *******
2026-02-02 06:09:54.918946 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:09:54.918952 | orchestrator |
2026-02-02 06:09:54.918959 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-02 06:09:54.918965 | orchestrator | Monday 02 February 2026 06:09:38 +0000 (0:00:01.506) 0:36:05.651 *******
2026-02-02 06:09:54.918971 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:09:54.918977 | orchestrator |
2026-02-02 06:09:54.918983 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-02 06:09:54.918990 | orchestrator | Monday 02 February 2026 06:09:39 +0000 (0:00:01.506) 0:36:07.158 *******
2026-02-02 06:09:54.918996 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:09:54.919002 | orchestrator |
2026-02-02 06:09:54.919008 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-02 06:09:54.919015 | orchestrator | Monday 02 February 2026 06:09:40 +0000 (0:00:01.103) 0:36:08.262 *******
2026-02-02 06:09:54.919021 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:09:54.919027 | orchestrator |
2026-02-02 06:09:54.919033 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-02 06:09:54.919039 | orchestrator | Monday 02 February 2026 06:09:41 +0000 (0:00:01.239) 0:36:09.502 *******
2026-02-02 06:09:54.919045 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:09:54.919052 | orchestrator |
2026-02-02 06:09:54.919058 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-02 06:09:54.919064 | orchestrator | Monday 02 February 2026 06:09:43 +0000 (0:00:01.122) 0:36:10.624 *******
2026-02-02 06:09:54.919070 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-02 06:09:54.919076 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-02 06:09:54.919082 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-02 06:09:54.919089 | orchestrator |
2026-02-02 06:09:54.919095 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-02 06:09:54.919101 | orchestrator | Monday 02 February 2026 06:09:45 +0000 (0:00:01.997) 0:36:12.621 *******
2026-02-02 06:09:54.919108 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-02 06:09:54.919114 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-02 06:09:54.919120 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-02 06:09:54.919126 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:09:54.919132 | orchestrator |
2026-02-02 06:09:54.919138 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-02 06:09:54.919144 | orchestrator | Monday 02 February 2026 06:09:46 +0000 (0:00:01.188) 0:36:13.810 *******
2026-02-02 06:09:54.919151 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3
2026-02-02 06:09:54.919157 | orchestrator |
2026-02-02 06:09:54.919165 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-02 06:09:54.919173 | orchestrator | Monday 02 February 2026 06:09:47 +0000 (0:00:01.133) 0:36:14.944 *******
2026-02-02 06:09:54.919179 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:09:54.919185 | orchestrator |
2026-02-02 06:09:54.919191 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-02 06:09:54.919197 | orchestrator | Monday 02 February 2026 06:09:48 +0000 (0:00:01.107) 0:36:16.051 *******
2026-02-02 06:09:54.919203 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:09:54.919210 | orchestrator |
2026-02-02 06:09:54.919216 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-02 06:09:54.919222 | orchestrator | Monday 02 February 2026 06:09:49 +0000 (0:00:01.258) 0:36:17.310 *******
2026-02-02 06:09:54.919228 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:09:54.919234 | orchestrator |
2026-02-02 06:09:54.919323 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-02 06:09:54.919332 | orchestrator | Monday 02 February 2026 06:09:50 +0000 (0:00:01.133) 0:36:18.444 *******
2026-02-02 06:09:54.919340 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:09:54.919353 | orchestrator |
2026-02-02 06:09:54.919360 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-02 06:09:54.919368 | orchestrator | Monday 02 February 2026 06:09:52 +0000 (0:00:01.229) 0:36:19.674 *******
2026-02-02 06:09:54.919375 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 06:09:54.919383 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 06:09:54.919390 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 06:09:54.919398 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:09:54.919405 | orchestrator |
2026-02-02 06:09:54.919412 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-02 06:09:54.919420 | orchestrator | Monday 02 February 2026 06:09:53 +0000 (0:00:01.405) 0:36:21.081 *******
2026-02-02 06:09:54.919428 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 06:09:54.919435 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 06:09:54.919443 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 06:09:54.919450 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:09:54.919458 | orchestrator |
2026-02-02 06:09:54.919471 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-02 06:10:42.872927 | orchestrator | Monday 02 February 2026 06:09:54 +0000 (0:00:01.404) 0:36:22.486 *******
2026-02-02 06:10:42.873050 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 06:10:42.873064 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 06:10:42.873075 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 06:10:42.873085 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:10:42.873095 | orchestrator |
2026-02-02 06:10:42.873107 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-02 06:10:42.873117 | orchestrator | Monday 02 February 2026 06:09:56 +0000 (0:00:01.430) 0:36:23.916 *******
2026-02-02 06:10:42.873127 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:10:42.873137 | orchestrator |
2026-02-02 06:10:42.873147 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-02 06:10:42.873156 | orchestrator | Monday 02 February 2026 06:09:57 +0000 (0:00:01.149) 0:36:25.066 *******
2026-02-02 06:10:42.873166 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-02 06:10:42.873176 | orchestrator |
2026-02-02 06:10:42.873185 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-02 06:10:42.873196 | orchestrator | Monday 02 February 2026 06:09:58 +0000 (0:00:01.366) 0:36:26.432 *******
2026-02-02 06:10:42.873212 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-02 06:10:42.873276 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 06:10:42.873286 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 06:10:42.873296 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 06:10:42.873305 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-02 06:10:42.873315 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-02 06:10:42.873324 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-02 06:10:42.873334 | orchestrator |
2026-02-02 06:10:42.873344 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-02 06:10:42.873354 | orchestrator | Monday 02 February 2026 06:10:01 +0000 (0:00:02.335) 0:36:28.767 *******
2026-02-02 06:10:42.873364 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-02 06:10:42.873373 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 06:10:42.873383 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 06:10:42.873414 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 06:10:42.873424 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-02 06:10:42.873433 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-02 06:10:42.873443 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-02 06:10:42.873452 | orchestrator |
2026-02-02 06:10:42.873462 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-02-02 06:10:42.873471 | orchestrator | Monday 02 February 2026 06:10:03 +0000 (0:00:02.678) 0:36:31.446 *******
2026-02-02 06:10:42.873480 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:10:42.873490 | orchestrator |
2026-02-02 06:10:42.873499 | orchestrator | TASK [Set num_osds] ************************************************************
2026-02-02 06:10:42.873509 | orchestrator | Monday 02 February 2026 06:10:05 +0000 (0:00:01.459) 0:36:32.905 *******
2026-02-02 06:10:42.873518 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:10:42.873527 | orchestrator |
2026-02-02 06:10:42.873537 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-02-02 06:10:42.873546 | orchestrator | Monday 02 February 2026 06:10:06 +0000 (0:00:01.196) 0:36:34.101 *******
2026-02-02 06:10:42.873556 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:10:42.873565 | orchestrator |
2026-02-02 06:10:42.873574 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-02-02 06:10:42.873584 | orchestrator | Monday 02 February 2026 06:10:08 +0000 (0:00:04.080) 0:36:35.807 *******
2026-02-02 06:10:42.873593 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-02-02 06:10:42.873603 | orchestrator | changed: [testbed-node-3] => (item=4)
2026-02-02 06:10:42.873612 | orchestrator |
2026-02-02 06:10:42.873622 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-02 06:10:42.873631 | orchestrator | Monday 02 February 2026 06:10:12 +0000 (0:00:04.080) 0:36:39.888 *******
2026-02-02 06:10:42.873641 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3
2026-02-02 06:10:42.873651 | orchestrator |
2026-02-02 06:10:42.873661 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-02 06:10:42.873670 | orchestrator | Monday 02 February 2026 06:10:13 +0000 (0:00:01.114) 0:36:41.002 *******
2026-02-02 06:10:42.873680 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3
2026-02-02 06:10:42.873689 | orchestrator |
2026-02-02 06:10:42.873699 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-02 06:10:42.873708 | orchestrator | Monday 02 February 2026 06:10:14 +0000 (0:00:01.108) 0:36:42.110 *******
2026-02-02 06:10:42.873717 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:10:42.873727 | orchestrator |
2026-02-02 06:10:42.873737 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-02 06:10:42.873746 | orchestrator | Monday 02 February 2026 06:10:15 +0000 (0:00:01.118) 0:36:43.229 *******
2026-02-02 06:10:42.873755 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:10:42.873765 | orchestrator |
2026-02-02 06:10:42.873775 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-02 06:10:42.873801 | orchestrator | Monday 02 February 2026 06:10:17 +0000 (0:00:01.477) 0:36:44.707 *******
2026-02-02 06:10:42.873812 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:10:42.873821 | orchestrator |
2026-02-02 06:10:42.873836 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-02 06:10:42.873846 | orchestrator | Monday 02 February 2026 06:10:18 +0000 (0:00:01.518) 0:36:46.226 *******
2026-02-02 06:10:42.873855 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:10:42.873865 | orchestrator |
2026-02-02 06:10:42.873874 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-02 06:10:42.873883 | orchestrator | Monday 02 February 2026 06:10:20 +0000 (0:00:01.506) 0:36:47.733 *******
2026-02-02 06:10:42.873893 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:10:42.873910 | orchestrator |
2026-02-02 06:10:42.873920 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-02 06:10:42.873929 | orchestrator | Monday 02 February 2026 06:10:21 +0000 (0:00:01.181) 0:36:48.914 *******
2026-02-02 06:10:42.873939 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:10:42.873948 | orchestrator |
2026-02-02 06:10:42.873957 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-02 06:10:42.873967 | orchestrator | Monday 02 February 2026 06:10:22 +0000 (0:00:01.144) 0:36:50.059 *******
2026-02-02 06:10:42.873976 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:10:42.873985 | orchestrator |
2026-02-02 06:10:42.873995 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-02 06:10:42.874004 | orchestrator | Monday 02 February 2026 06:10:23 +0000 (0:00:01.125) 0:36:51.184 *******
2026-02-02 06:10:42.874013 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:10:42.874092 | orchestrator |
2026-02-02 06:10:42.874103 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-02 06:10:42.874112 | orchestrator | Monday 02 February 2026 06:10:25 +0000 (0:00:01.720) 0:36:52.904 *******
2026-02-02 06:10:42.874122 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:10:42.874141 | orchestrator |
2026-02-02 06:10:42.874151 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-02 06:10:42.874160 | orchestrator | Monday 02 February 2026 06:10:26 +0000 (0:00:01.169) 0:36:54.477 *******
2026-02-02 06:10:42.874170 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:10:42.874179 | orchestrator |
2026-02-02 06:10:42.874188 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-02 06:10:42.874198 | orchestrator | Monday 02 February 2026 06:10:28 +0000 (0:00:01.118) 0:36:55.647 *******
2026-02-02 06:10:42.874207 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:10:42.874216 | orchestrator |
2026-02-02 06:10:42.874302 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-02 06:10:42.874313 | orchestrator | Monday 02 February 2026 06:10:29 +0000 (0:00:01.206) 0:36:56.765 *******
2026-02-02 06:10:42.874322 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:10:42.874331 | orchestrator |
2026-02-02 06:10:42.874341 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-02 06:10:42.874350 | orchestrator | Monday 02 February 2026 06:10:30 +0000 (0:00:01.154) 0:36:57.971 *******
2026-02-02 06:10:42.874360 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:10:42.874369 | orchestrator |
2026-02-02 06:10:42.874379 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-02 06:10:42.874388 | orchestrator | Monday 02 February 2026 06:10:31 +0000 (0:00:01.149) 0:36:59.127 *******
2026-02-02 06:10:42.874398 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:10:42.874407 | orchestrator |
2026-02-02 06:10:42.874416 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-02 06:10:42.874426 | orchestrator | Monday 02 February 2026 06:10:32 +0000 (0:00:01.149) 0:37:00.278 *******
2026-02-02 06:10:42.874435 | orchestrator | skipping: 
[testbed-node-3] 2026-02-02 06:10:42.874444 | orchestrator | 2026-02-02 06:10:42.874454 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-02 06:10:42.874463 | orchestrator | Monday 02 February 2026 06:10:33 +0000 (0:00:01.092) 0:37:01.371 ******* 2026-02-02 06:10:42.874473 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:10:42.874482 | orchestrator | 2026-02-02 06:10:42.874491 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-02 06:10:42.874501 | orchestrator | Monday 02 February 2026 06:10:34 +0000 (0:00:01.115) 0:37:02.486 ******* 2026-02-02 06:10:42.874510 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:10:42.874519 | orchestrator | 2026-02-02 06:10:42.874529 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-02 06:10:42.874538 | orchestrator | Monday 02 February 2026 06:10:36 +0000 (0:00:01.111) 0:37:03.597 ******* 2026-02-02 06:10:42.874547 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:10:42.874565 | orchestrator | 2026-02-02 06:10:42.874575 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-02 06:10:42.874584 | orchestrator | Monday 02 February 2026 06:10:37 +0000 (0:00:01.113) 0:37:04.712 ******* 2026-02-02 06:10:42.874594 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:10:42.874603 | orchestrator | 2026-02-02 06:10:42.874612 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-02 06:10:42.874622 | orchestrator | Monday 02 February 2026 06:10:38 +0000 (0:00:01.120) 0:37:05.832 ******* 2026-02-02 06:10:42.874631 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:10:42.874641 | orchestrator | 2026-02-02 06:10:42.874650 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-02 06:10:42.874660 | 
orchestrator | Monday 02 February 2026 06:10:39 +0000 (0:00:01.248) 0:37:07.081 ******* 2026-02-02 06:10:42.874669 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:10:42.874678 | orchestrator | 2026-02-02 06:10:42.874688 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-02 06:10:42.874697 | orchestrator | Monday 02 February 2026 06:10:40 +0000 (0:00:01.139) 0:37:08.221 ******* 2026-02-02 06:10:42.874706 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:10:42.874716 | orchestrator | 2026-02-02 06:10:42.874725 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-02 06:10:42.874735 | orchestrator | Monday 02 February 2026 06:10:41 +0000 (0:00:01.110) 0:37:09.331 ******* 2026-02-02 06:10:42.874744 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:10:42.874754 | orchestrator | 2026-02-02 06:10:42.874771 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-02 06:11:32.237781 | orchestrator | Monday 02 February 2026 06:10:42 +0000 (0:00:01.110) 0:37:10.442 ******* 2026-02-02 06:11:32.237900 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:11:32.237915 | orchestrator | 2026-02-02 06:11:32.237924 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-02 06:11:32.237941 | orchestrator | Monday 02 February 2026 06:10:44 +0000 (0:00:01.245) 0:37:11.688 ******* 2026-02-02 06:11:32.237989 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:11:32.237998 | orchestrator | 2026-02-02 06:11:32.238006 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-02 06:11:32.238013 | orchestrator | Monday 02 February 2026 06:10:45 +0000 (0:00:01.158) 0:37:12.846 ******* 2026-02-02 06:11:32.238051 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:11:32.238058 | orchestrator | 2026-02-02 
06:11:32.238065 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-02 06:11:32.238073 | orchestrator | Monday 02 February 2026 06:10:46 +0000 (0:00:01.195) 0:37:14.041 ******* 2026-02-02 06:11:32.238081 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:11:32.238087 | orchestrator | 2026-02-02 06:11:32.238094 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-02 06:11:32.238101 | orchestrator | Monday 02 February 2026 06:10:47 +0000 (0:00:01.201) 0:37:15.243 ******* 2026-02-02 06:11:32.238107 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:11:32.238114 | orchestrator | 2026-02-02 06:11:32.238121 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-02 06:11:32.238128 | orchestrator | Monday 02 February 2026 06:10:48 +0000 (0:00:01.116) 0:37:16.359 ******* 2026-02-02 06:11:32.238135 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:11:32.238142 | orchestrator | 2026-02-02 06:11:32.238149 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-02 06:11:32.238156 | orchestrator | Monday 02 February 2026 06:10:49 +0000 (0:00:01.090) 0:37:17.450 ******* 2026-02-02 06:11:32.238162 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:11:32.238169 | orchestrator | 2026-02-02 06:11:32.238176 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-02 06:11:32.238182 | orchestrator | Monday 02 February 2026 06:10:50 +0000 (0:00:01.124) 0:37:18.575 ******* 2026-02-02 06:11:32.238252 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:11:32.238261 | orchestrator | 2026-02-02 06:11:32.238268 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-02 06:11:32.238274 | orchestrator | Monday 02 February 2026 06:10:52 +0000 
(0:00:01.111) 0:37:19.687 ******* 2026-02-02 06:11:32.238281 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:11:32.238289 | orchestrator | 2026-02-02 06:11:32.238296 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-02 06:11:32.238303 | orchestrator | Monday 02 February 2026 06:10:54 +0000 (0:00:01.998) 0:37:21.686 ******* 2026-02-02 06:11:32.238310 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:11:32.238317 | orchestrator | 2026-02-02 06:11:32.238324 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-02 06:11:32.238332 | orchestrator | Monday 02 February 2026 06:10:56 +0000 (0:00:02.160) 0:37:23.846 ******* 2026-02-02 06:11:32.238339 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3 2026-02-02 06:11:32.238347 | orchestrator | 2026-02-02 06:11:32.238353 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-02 06:11:32.238359 | orchestrator | Monday 02 February 2026 06:10:57 +0000 (0:00:01.103) 0:37:24.950 ******* 2026-02-02 06:11:32.238365 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:11:32.238372 | orchestrator | 2026-02-02 06:11:32.238378 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-02 06:11:32.238384 | orchestrator | Monday 02 February 2026 06:10:58 +0000 (0:00:01.141) 0:37:26.092 ******* 2026-02-02 06:11:32.238390 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:11:32.238395 | orchestrator | 2026-02-02 06:11:32.238401 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-02 06:11:32.238408 | orchestrator | Monday 02 February 2026 06:10:59 +0000 (0:00:01.259) 0:37:27.351 ******* 2026-02-02 06:11:32.238414 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-02 
06:11:32.238420 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-02 06:11:32.238426 | orchestrator | 2026-02-02 06:11:32.238432 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-02 06:11:32.238439 | orchestrator | Monday 02 February 2026 06:11:01 +0000 (0:00:01.902) 0:37:29.253 ******* 2026-02-02 06:11:32.238445 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:11:32.238452 | orchestrator | 2026-02-02 06:11:32.238458 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-02 06:11:32.238464 | orchestrator | Monday 02 February 2026 06:11:03 +0000 (0:00:01.473) 0:37:30.726 ******* 2026-02-02 06:11:32.238470 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:11:32.238477 | orchestrator | 2026-02-02 06:11:32.238483 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-02 06:11:32.238489 | orchestrator | Monday 02 February 2026 06:11:04 +0000 (0:00:01.126) 0:37:31.853 ******* 2026-02-02 06:11:32.238495 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:11:32.238502 | orchestrator | 2026-02-02 06:11:32.238508 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-02 06:11:32.238515 | orchestrator | Monday 02 February 2026 06:11:05 +0000 (0:00:01.213) 0:37:33.067 ******* 2026-02-02 06:11:32.238522 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:11:32.238529 | orchestrator | 2026-02-02 06:11:32.238535 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-02 06:11:32.238542 | orchestrator | Monday 02 February 2026 06:11:06 +0000 (0:00:01.132) 0:37:34.200 ******* 2026-02-02 06:11:32.238548 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3 2026-02-02 06:11:32.238555 | orchestrator | 
2026-02-02 06:11:32.238562 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-02 06:11:32.238586 | orchestrator | Monday 02 February 2026 06:11:07 +0000 (0:00:01.166) 0:37:35.366 ******* 2026-02-02 06:11:32.238602 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:11:32.238618 | orchestrator | 2026-02-02 06:11:32.238625 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-02 06:11:32.238632 | orchestrator | Monday 02 February 2026 06:11:09 +0000 (0:00:01.739) 0:37:37.106 ******* 2026-02-02 06:11:32.238639 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-02 06:11:32.238645 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-02 06:11:32.238652 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-02 06:11:32.238658 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:11:32.238664 | orchestrator | 2026-02-02 06:11:32.238670 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-02 06:11:32.238677 | orchestrator | Monday 02 February 2026 06:11:10 +0000 (0:00:01.187) 0:37:38.294 ******* 2026-02-02 06:11:32.238683 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:11:32.238688 | orchestrator | 2026-02-02 06:11:32.238694 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-02 06:11:32.238701 | orchestrator | Monday 02 February 2026 06:11:11 +0000 (0:00:01.159) 0:37:39.453 ******* 2026-02-02 06:11:32.238707 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:11:32.238714 | orchestrator | 2026-02-02 06:11:32.238720 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-02 06:11:32.238726 | orchestrator | Monday 02 February 2026 06:11:13 +0000 
(0:00:01.207) 0:37:40.661 ******* 2026-02-02 06:11:32.238733 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:11:32.238739 | orchestrator | 2026-02-02 06:11:32.238745 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-02 06:11:32.238751 | orchestrator | Monday 02 February 2026 06:11:14 +0000 (0:00:01.178) 0:37:41.840 ******* 2026-02-02 06:11:32.238757 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:11:32.238764 | orchestrator | 2026-02-02 06:11:32.238771 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-02 06:11:32.238778 | orchestrator | Monday 02 February 2026 06:11:15 +0000 (0:00:01.179) 0:37:43.020 ******* 2026-02-02 06:11:32.238785 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:11:32.238792 | orchestrator | 2026-02-02 06:11:32.238800 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-02 06:11:32.238807 | orchestrator | Monday 02 February 2026 06:11:16 +0000 (0:00:01.197) 0:37:44.217 ******* 2026-02-02 06:11:32.238814 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:11:32.238820 | orchestrator | 2026-02-02 06:11:32.238826 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-02 06:11:32.238833 | orchestrator | Monday 02 February 2026 06:11:19 +0000 (0:00:02.447) 0:37:46.665 ******* 2026-02-02 06:11:32.238839 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:11:32.238846 | orchestrator | 2026-02-02 06:11:32.238852 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-02 06:11:32.238859 | orchestrator | Monday 02 February 2026 06:11:20 +0000 (0:00:01.165) 0:37:47.831 ******* 2026-02-02 06:11:32.238865 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-02-02 06:11:32.238871 | orchestrator | 2026-02-02 
06:11:32.238877 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-02 06:11:32.238884 | orchestrator | Monday 02 February 2026 06:11:21 +0000 (0:00:01.178) 0:37:49.009 ******* 2026-02-02 06:11:32.238890 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:11:32.238897 | orchestrator | 2026-02-02 06:11:32.238903 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-02 06:11:32.238910 | orchestrator | Monday 02 February 2026 06:11:22 +0000 (0:00:01.191) 0:37:50.201 ******* 2026-02-02 06:11:32.238916 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:11:32.238923 | orchestrator | 2026-02-02 06:11:32.238929 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-02 06:11:32.238935 | orchestrator | Monday 02 February 2026 06:11:23 +0000 (0:00:01.117) 0:37:51.319 ******* 2026-02-02 06:11:32.238952 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:11:32.238959 | orchestrator | 2026-02-02 06:11:32.238965 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-02 06:11:32.238972 | orchestrator | Monday 02 February 2026 06:11:24 +0000 (0:00:01.238) 0:37:52.557 ******* 2026-02-02 06:11:32.238979 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:11:32.238985 | orchestrator | 2026-02-02 06:11:32.238992 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-02 06:11:32.238999 | orchestrator | Monday 02 February 2026 06:11:26 +0000 (0:00:01.259) 0:37:53.816 ******* 2026-02-02 06:11:32.239066 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:11:32.239074 | orchestrator | 2026-02-02 06:11:32.239081 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-02 06:11:32.239087 | orchestrator | Monday 02 February 2026 06:11:27 +0000 (0:00:01.132) 
0:37:54.948 ******* 2026-02-02 06:11:32.239093 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:11:32.239099 | orchestrator | 2026-02-02 06:11:32.239105 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-02 06:11:32.239111 | orchestrator | Monday 02 February 2026 06:11:28 +0000 (0:00:01.130) 0:37:56.079 ******* 2026-02-02 06:11:32.239117 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:11:32.239124 | orchestrator | 2026-02-02 06:11:32.239130 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-02 06:11:32.239136 | orchestrator | Monday 02 February 2026 06:11:29 +0000 (0:00:01.243) 0:37:57.322 ******* 2026-02-02 06:11:32.239142 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:11:32.239149 | orchestrator | 2026-02-02 06:11:32.239156 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-02 06:11:32.239162 | orchestrator | Monday 02 February 2026 06:11:30 +0000 (0:00:01.150) 0:37:58.473 ******* 2026-02-02 06:11:32.239168 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:11:32.239176 | orchestrator | 2026-02-02 06:11:32.239182 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-02 06:11:32.239231 | orchestrator | Monday 02 February 2026 06:11:32 +0000 (0:00:01.328) 0:37:59.801 ******* 2026-02-02 06:12:21.955758 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-02-02 06:12:21.955908 | orchestrator | 2026-02-02 06:12:21.955937 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-02 06:12:21.955956 | orchestrator | Monday 02 February 2026 06:11:33 +0000 (0:00:01.139) 0:38:00.941 ******* 2026-02-02 06:12:21.955974 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-02-02 06:12:21.955994 | orchestrator | ok: 
[testbed-node-3] => (item=/var/lib/ceph/) 2026-02-02 06:12:21.956011 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-02 06:12:21.956029 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-02 06:12:21.956048 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-02 06:12:21.956068 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-02 06:12:21.956087 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-02 06:12:21.956107 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-02 06:12:21.956126 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-02 06:12:21.956144 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-02 06:12:21.956164 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-02 06:12:21.956181 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-02 06:12:21.956237 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-02 06:12:21.956257 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-02 06:12:21.956276 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-02-02 06:12:21.956294 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-02-02 06:12:21.956348 | orchestrator | 2026-02-02 06:12:21.956371 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-02 06:12:21.956388 | orchestrator | Monday 02 February 2026 06:11:39 +0000 (0:00:06.561) 0:38:07.503 ******* 2026-02-02 06:12:21.956404 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3 2026-02-02 06:12:21.956416 | orchestrator | 2026-02-02 06:12:21.956427 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] 
***************** 2026-02-02 06:12:21.956438 | orchestrator | Monday 02 February 2026 06:11:41 +0000 (0:00:01.599) 0:38:09.103 ******* 2026-02-02 06:12:21.956450 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-02 06:12:21.956463 | orchestrator | 2026-02-02 06:12:21.956474 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-02 06:12:21.956484 | orchestrator | Monday 02 February 2026 06:11:43 +0000 (0:00:01.484) 0:38:10.587 ******* 2026-02-02 06:12:21.956493 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-02 06:12:21.956503 | orchestrator | 2026-02-02 06:12:21.956512 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-02 06:12:21.956522 | orchestrator | Monday 02 February 2026 06:11:45 +0000 (0:00:02.054) 0:38:12.641 ******* 2026-02-02 06:12:21.956531 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:12:21.956541 | orchestrator | 2026-02-02 06:12:21.956551 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-02 06:12:21.956560 | orchestrator | Monday 02 February 2026 06:11:46 +0000 (0:00:01.118) 0:38:13.759 ******* 2026-02-02 06:12:21.956570 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:12:21.956579 | orchestrator | 2026-02-02 06:12:21.956589 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-02 06:12:21.956598 | orchestrator | Monday 02 February 2026 06:11:47 +0000 (0:00:01.129) 0:38:14.889 ******* 2026-02-02 06:12:21.956607 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:12:21.956617 | orchestrator | 2026-02-02 06:12:21.956627 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 
2026-02-02 06:12:21.956636 | orchestrator | Monday 02 February 2026 06:11:48 +0000 (0:00:01.171) 0:38:16.061 ******* 2026-02-02 06:12:21.956646 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:12:21.956655 | orchestrator | 2026-02-02 06:12:21.956665 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-02 06:12:21.956674 | orchestrator | Monday 02 February 2026 06:11:49 +0000 (0:00:01.132) 0:38:17.194 ******* 2026-02-02 06:12:21.956683 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:12:21.956694 | orchestrator | 2026-02-02 06:12:21.956704 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-02 06:12:21.956713 | orchestrator | Monday 02 February 2026 06:11:50 +0000 (0:00:01.185) 0:38:18.380 ******* 2026-02-02 06:12:21.956723 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:12:21.956733 | orchestrator | 2026-02-02 06:12:21.956742 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-02 06:12:21.956752 | orchestrator | Monday 02 February 2026 06:11:51 +0000 (0:00:01.095) 0:38:19.475 ******* 2026-02-02 06:12:21.956761 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:12:21.956771 | orchestrator | 2026-02-02 06:12:21.956780 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-02 06:12:21.956790 | orchestrator | Monday 02 February 2026 06:11:53 +0000 (0:00:01.110) 0:38:20.586 ******* 2026-02-02 06:12:21.956800 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:12:21.956809 | orchestrator | 2026-02-02 06:12:21.956819 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-02 06:12:21.956828 | orchestrator | Monday 02 February 2026 06:11:54 +0000 (0:00:01.102) 0:38:21.689 ******* 
2026-02-02 06:12:21.956845 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:12:21.956855 | orchestrator | 2026-02-02 06:12:21.956900 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-02 06:12:21.956912 | orchestrator | Monday 02 February 2026 06:11:55 +0000 (0:00:01.124) 0:38:22.813 ******* 2026-02-02 06:12:21.956921 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:12:21.956931 | orchestrator | 2026-02-02 06:12:21.956940 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-02 06:12:21.956950 | orchestrator | Monday 02 February 2026 06:11:56 +0000 (0:00:01.141) 0:38:23.955 ******* 2026-02-02 06:12:21.956960 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:12:21.956969 | orchestrator | 2026-02-02 06:12:21.956979 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-02 06:12:21.956988 | orchestrator | Monday 02 February 2026 06:11:57 +0000 (0:00:01.211) 0:38:25.167 ******* 2026-02-02 06:12:21.956998 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-02 06:12:21.957007 | orchestrator | 2026-02-02 06:12:21.957017 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-02 06:12:21.957026 | orchestrator | Monday 02 February 2026 06:12:02 +0000 (0:00:04.432) 0:38:29.599 ******* 2026-02-02 06:12:21.957036 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-02 06:12:21.957045 | orchestrator | 2026-02-02 06:12:21.957055 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-02 06:12:21.957064 | orchestrator | Monday 02 February 2026 06:12:03 +0000 (0:00:01.249) 0:38:30.848 ******* 2026-02-02 06:12:21.957075 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-02-02 06:12:21.957088 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-02-02 06:12:21.957099 | orchestrator | 2026-02-02 06:12:21.957109 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-02 06:12:21.957118 | orchestrator | Monday 02 February 2026 06:12:10 +0000 (0:00:07.496) 0:38:38.345 ******* 2026-02-02 06:12:21.957128 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:12:21.957137 | orchestrator | 2026-02-02 06:12:21.957146 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-02 06:12:21.957156 | orchestrator | Monday 02 February 2026 06:12:11 +0000 (0:00:01.121) 0:38:39.467 ******* 2026-02-02 06:12:21.957165 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:12:21.957174 | orchestrator | 2026-02-02 06:12:21.957184 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-02 06:12:21.957216 | orchestrator | Monday 02 February 2026 06:12:12 +0000 (0:00:01.105) 0:38:40.573 ******* 2026-02-02 06:12:21.957225 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:12:21.957235 | orchestrator | 2026-02-02 06:12:21.957245 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-02 
06:12:21.957254 | orchestrator | Monday 02 February 2026 06:12:14 +0000 (0:00:01.123) 0:38:41.696 ******* 2026-02-02 06:12:21.957264 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:12:21.957273 | orchestrator | 2026-02-02 06:12:21.957282 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-02 06:12:21.957292 | orchestrator | Monday 02 February 2026 06:12:15 +0000 (0:00:01.168) 0:38:42.865 ******* 2026-02-02 06:12:21.957302 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:12:21.957317 | orchestrator | 2026-02-02 06:12:21.957327 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-02 06:12:21.957337 | orchestrator | Monday 02 February 2026 06:12:16 +0000 (0:00:01.142) 0:38:44.008 ******* 2026-02-02 06:12:21.957346 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:12:21.957356 | orchestrator | 2026-02-02 06:12:21.957365 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-02 06:12:21.957374 | orchestrator | Monday 02 February 2026 06:12:17 +0000 (0:00:01.235) 0:38:45.243 ******* 2026-02-02 06:12:21.957384 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-02 06:12:21.957393 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-02 06:12:21.957402 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-02 06:12:21.957412 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:12:21.957421 | orchestrator | 2026-02-02 06:12:21.957431 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-02 06:12:21.957440 | orchestrator | Monday 02 February 2026 06:12:19 +0000 (0:00:01.366) 0:38:46.609 ******* 2026-02-02 06:12:21.957449 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-02 06:12:21.957459 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-02-02 06:12:21.957468 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-02 06:12:21.957477 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:12:21.957487 | orchestrator | 2026-02-02 06:12:21.957496 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-02 06:12:21.957506 | orchestrator | Monday 02 February 2026 06:12:20 +0000 (0:00:01.427) 0:38:48.037 ******* 2026-02-02 06:12:21.957515 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-02 06:12:21.957525 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-02 06:12:21.957541 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-02 06:13:22.154438 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:13:22.154555 | orchestrator | 2026-02-02 06:13:22.154572 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-02 06:13:22.154586 | orchestrator | Monday 02 February 2026 06:12:21 +0000 (0:00:01.488) 0:38:49.525 ******* 2026-02-02 06:13:22.154598 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:13:22.154610 | orchestrator | 2026-02-02 06:13:22.154621 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-02 06:13:22.154632 | orchestrator | Monday 02 February 2026 06:12:23 +0000 (0:00:01.173) 0:38:50.699 ******* 2026-02-02 06:13:22.154643 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-02 06:13:22.154654 | orchestrator | 2026-02-02 06:13:22.154665 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-02 06:13:22.154676 | orchestrator | Monday 02 February 2026 06:12:24 +0000 (0:00:01.676) 0:38:52.375 ******* 2026-02-02 06:13:22.154688 | orchestrator | changed: [testbed-node-3] 2026-02-02 06:13:22.154698 | orchestrator | 2026-02-02 06:13:22.154709 | orchestrator | 
TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-02 06:13:22.154720 | orchestrator | Monday 02 February 2026 06:12:27 +0000 (0:00:02.238) 0:38:54.614 ******* 2026-02-02 06:13:22.154731 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:13:22.154742 | orchestrator | 2026-02-02 06:13:22.154753 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-02 06:13:22.154764 | orchestrator | Monday 02 February 2026 06:12:28 +0000 (0:00:01.129) 0:38:55.743 ******* 2026-02-02 06:13:22.154774 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 06:13:22.154836 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 06:13:22.154849 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 06:13:22.154860 | orchestrator | 2026-02-02 06:13:22.154872 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-02 06:13:22.154908 | orchestrator | Monday 02 February 2026 06:12:29 +0000 (0:00:01.718) 0:38:57.462 ******* 2026-02-02 06:13:22.154920 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3 2026-02-02 06:13:22.154931 | orchestrator | 2026-02-02 06:13:22.154942 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-02 06:13:22.154953 | orchestrator | Monday 02 February 2026 06:12:31 +0000 (0:00:01.600) 0:38:59.062 ******* 2026-02-02 06:13:22.154964 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:13:22.154978 | orchestrator | 2026-02-02 06:13:22.154991 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-02 06:13:22.155004 | orchestrator | Monday 02 February 2026 06:12:32 +0000 (0:00:01.137) 0:39:00.200 ******* 2026-02-02 06:13:22.155016 | 
orchestrator | skipping: [testbed-node-3] 2026-02-02 06:13:22.155029 | orchestrator | 2026-02-02 06:13:22.155042 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-02 06:13:22.155054 | orchestrator | Monday 02 February 2026 06:12:33 +0000 (0:00:01.178) 0:39:01.378 ******* 2026-02-02 06:13:22.155066 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:13:22.155079 | orchestrator | 2026-02-02 06:13:22.155091 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-02 06:13:22.155104 | orchestrator | Monday 02 February 2026 06:12:35 +0000 (0:00:01.457) 0:39:02.835 ******* 2026-02-02 06:13:22.155117 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:13:22.155129 | orchestrator | 2026-02-02 06:13:22.155142 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-02 06:13:22.155155 | orchestrator | Monday 02 February 2026 06:12:36 +0000 (0:00:01.187) 0:39:04.023 ******* 2026-02-02 06:13:22.155167 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-02 06:13:22.155209 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-02 06:13:22.155224 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-02 06:13:22.155237 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-02 06:13:22.155250 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-02 06:13:22.155263 | orchestrator | 2026-02-02 06:13:22.155276 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-02-02 06:13:22.155288 | orchestrator | Monday 02 February 2026 06:12:39 +0000 (0:00:03.019) 0:39:07.042 ******* 2026-02-02 06:13:22.155300 | orchestrator | skipping: [testbed-node-3] 
2026-02-02 06:13:22.155313 | orchestrator | 2026-02-02 06:13:22.155326 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-02 06:13:22.155338 | orchestrator | Monday 02 February 2026 06:12:40 +0000 (0:00:01.115) 0:39:08.158 ******* 2026-02-02 06:13:22.155352 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3 2026-02-02 06:13:22.155363 | orchestrator | 2026-02-02 06:13:22.155374 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-02 06:13:22.155384 | orchestrator | Monday 02 February 2026 06:12:42 +0000 (0:00:01.681) 0:39:09.840 ******* 2026-02-02 06:13:22.155395 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-02 06:13:22.155406 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-02-02 06:13:22.155417 | orchestrator | 2026-02-02 06:13:22.155428 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-02 06:13:22.155439 | orchestrator | Monday 02 February 2026 06:12:44 +0000 (0:00:01.845) 0:39:11.686 ******* 2026-02-02 06:13:22.155450 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 06:13:22.155461 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-02 06:13:22.155472 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-02 06:13:22.155483 | orchestrator | 2026-02-02 06:13:22.155517 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-02 06:13:22.155538 | orchestrator | Monday 02 February 2026 06:12:47 +0000 (0:00:03.204) 0:39:14.891 ******* 2026-02-02 06:13:22.155548 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-02 06:13:22.155559 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-02 06:13:22.155570 | orchestrator | ok: [testbed-node-3] 
2026-02-02 06:13:22.155581 | orchestrator | 2026-02-02 06:13:22.155592 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-02 06:13:22.155602 | orchestrator | Monday 02 February 2026 06:12:49 +0000 (0:00:01.943) 0:39:16.834 ******* 2026-02-02 06:13:22.155613 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:13:22.155624 | orchestrator | 2026-02-02 06:13:22.155634 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-02 06:13:22.155645 | orchestrator | Monday 02 February 2026 06:12:50 +0000 (0:00:01.226) 0:39:18.061 ******* 2026-02-02 06:13:22.155656 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:13:22.155667 | orchestrator | 2026-02-02 06:13:22.155677 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-02 06:13:22.155688 | orchestrator | Monday 02 February 2026 06:12:51 +0000 (0:00:01.155) 0:39:19.216 ******* 2026-02-02 06:13:22.155699 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:13:22.155712 | orchestrator | 2026-02-02 06:13:22.155730 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-02 06:13:22.155749 | orchestrator | Monday 02 February 2026 06:12:52 +0000 (0:00:01.112) 0:39:20.329 ******* 2026-02-02 06:13:22.155768 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3 2026-02-02 06:13:22.155779 | orchestrator | 2026-02-02 06:13:22.155790 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-02 06:13:22.155801 | orchestrator | Monday 02 February 2026 06:12:54 +0000 (0:00:01.493) 0:39:21.822 ******* 2026-02-02 06:13:22.155811 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:13:22.155822 | orchestrator | 2026-02-02 06:13:22.155833 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 
2026-02-02 06:13:22.155844 | orchestrator | Monday 02 February 2026 06:12:55 +0000 (0:00:01.494) 0:39:23.316 ******* 2026-02-02 06:13:22.155854 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:13:22.155865 | orchestrator | 2026-02-02 06:13:22.155876 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-02 06:13:22.155887 | orchestrator | Monday 02 February 2026 06:12:59 +0000 (0:00:03.433) 0:39:26.750 ******* 2026-02-02 06:13:22.155897 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3 2026-02-02 06:13:22.155911 | orchestrator | 2026-02-02 06:13:22.155931 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-02-02 06:13:22.155950 | orchestrator | Monday 02 February 2026 06:13:00 +0000 (0:00:01.506) 0:39:28.256 ******* 2026-02-02 06:13:22.155969 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:13:22.155988 | orchestrator | 2026-02-02 06:13:22.156008 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-02 06:13:22.156028 | orchestrator | Monday 02 February 2026 06:13:02 +0000 (0:00:01.973) 0:39:30.230 ******* 2026-02-02 06:13:22.156047 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:13:22.156067 | orchestrator | 2026-02-02 06:13:22.156085 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-02 06:13:22.156104 | orchestrator | Monday 02 February 2026 06:13:04 +0000 (0:00:01.901) 0:39:32.132 ******* 2026-02-02 06:13:22.156124 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:13:22.156144 | orchestrator | 2026-02-02 06:13:22.156164 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-02 06:13:22.156210 | orchestrator | Monday 02 February 2026 06:13:06 +0000 (0:00:02.203) 0:39:34.335 ******* 2026-02-02 06:13:22.156232 | orchestrator | skipping: [testbed-node-3] 
2026-02-02 06:13:22.156249 | orchestrator | 2026-02-02 06:13:22.156268 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-02-02 06:13:22.156301 | orchestrator | Monday 02 February 2026 06:13:07 +0000 (0:00:01.141) 0:39:35.477 ******* 2026-02-02 06:13:22.156322 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:13:22.156342 | orchestrator | 2026-02-02 06:13:22.156357 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-02 06:13:22.156368 | orchestrator | Monday 02 February 2026 06:13:09 +0000 (0:00:01.125) 0:39:36.602 ******* 2026-02-02 06:13:22.156379 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-02 06:13:22.156395 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-02-02 06:13:22.156415 | orchestrator | 2026-02-02 06:13:22.156434 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-02 06:13:22.156454 | orchestrator | Monday 02 February 2026 06:13:10 +0000 (0:00:01.877) 0:39:38.479 ******* 2026-02-02 06:13:22.156474 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-02 06:13:22.156494 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-02-02 06:13:22.156513 | orchestrator | 2026-02-02 06:13:22.156531 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-02-02 06:13:22.156542 | orchestrator | Monday 02 February 2026 06:13:13 +0000 (0:00:02.826) 0:39:41.306 ******* 2026-02-02 06:13:22.156553 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-02 06:13:22.156563 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-02-02 06:13:22.156574 | orchestrator | 2026-02-02 06:13:22.156585 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-02 06:13:22.156596 | orchestrator | Monday 02 February 2026 06:13:18 +0000 (0:00:04.653) 0:39:45.959 ******* 2026-02-02 06:13:22.156606 | orchestrator 
| skipping: [testbed-node-3] 2026-02-02 06:13:22.156617 | orchestrator | 2026-02-02 06:13:22.156628 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-02 06:13:22.156639 | orchestrator | Monday 02 February 2026 06:13:19 +0000 (0:00:01.236) 0:39:47.196 ******* 2026-02-02 06:13:22.156654 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:13:22.156674 | orchestrator | 2026-02-02 06:13:22.156693 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-02 06:13:22.156713 | orchestrator | Monday 02 February 2026 06:13:20 +0000 (0:00:01.241) 0:39:48.438 ******* 2026-02-02 06:13:22.156741 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:13:22.156761 | orchestrator | 2026-02-02 06:13:22.156792 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-02-02 06:14:08.784965 | orchestrator | Monday 02 February 2026 06:13:22 +0000 (0:00:01.283) 0:39:49.721 ******* 2026-02-02 06:14:08.785082 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:14:08.785098 | orchestrator | 2026-02-02 06:14:08.785112 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-02-02 06:14:08.785124 | orchestrator | Monday 02 February 2026 06:13:23 +0000 (0:00:01.179) 0:39:50.901 ******* 2026-02-02 06:14:08.785135 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:14:08.785146 | orchestrator | 2026-02-02 06:14:08.785157 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-02-02 06:14:08.785168 | orchestrator | Monday 02 February 2026 06:13:24 +0000 (0:00:01.205) 0:39:52.106 ******* 2026-02-02 06:14:08.785234 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-02-02 06:14:08.785247 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-02-02 06:14:08.785258 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-02-02 06:14:08.785269 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (597 retries left). 2026-02-02 06:14:08.785280 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-02 06:14:08.785291 | orchestrator | 2026-02-02 06:14:08.785302 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-02 06:14:08.785313 | orchestrator | Monday 02 February 2026 06:13:38 +0000 (0:00:13.973) 0:40:06.079 ******* 2026-02-02 06:14:08.785346 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:14:08.785359 | orchestrator | 2026-02-02 06:14:08.785371 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-02 06:14:08.785382 | orchestrator | Monday 02 February 2026 06:13:39 +0000 (0:00:01.132) 0:40:07.212 ******* 2026-02-02 06:14:08.785392 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:14:08.785403 | orchestrator | 2026-02-02 06:14:08.785414 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-02 06:14:08.785425 | orchestrator | Monday 02 February 2026 06:13:40 +0000 (0:00:01.127) 0:40:08.339 ******* 2026-02-02 06:14:08.785436 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:14:08.785446 | orchestrator | 2026-02-02 06:14:08.785457 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-02 06:14:08.785467 | orchestrator | Monday 02 February 2026 06:13:41 +0000 (0:00:01.165) 0:40:09.505 ******* 2026-02-02 06:14:08.785480 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:14:08.785493 | orchestrator 
| 2026-02-02 06:14:08.785505 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-02 06:14:08.785517 | orchestrator | Monday 02 February 2026 06:13:43 +0000 (0:00:01.126) 0:40:10.632 ******* 2026-02-02 06:14:08.785531 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:14:08.785543 | orchestrator | 2026-02-02 06:14:08.785556 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-02-02 06:14:08.785569 | orchestrator | Monday 02 February 2026 06:13:44 +0000 (0:00:01.112) 0:40:11.744 ******* 2026-02-02 06:14:08.785581 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:14:08.785594 | orchestrator | 2026-02-02 06:14:08.785607 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-02 06:14:08.785620 | orchestrator | Monday 02 February 2026 06:13:45 +0000 (0:00:01.198) 0:40:12.943 ******* 2026-02-02 06:14:08.785632 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:14:08.785645 | orchestrator | 2026-02-02 06:14:08.785657 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-02-02 06:14:08.785670 | orchestrator | 2026-02-02 06:14:08.785683 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-02 06:14:08.785695 | orchestrator | Monday 02 February 2026 06:13:46 +0000 (0:00:00.990) 0:40:13.933 ******* 2026-02-02 06:14:08.785708 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-02-02 06:14:08.785721 | orchestrator | 2026-02-02 06:14:08.785734 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-02 06:14:08.785746 | orchestrator | Monday 02 February 2026 06:13:47 +0000 (0:00:01.166) 0:40:15.099 ******* 2026-02-02 06:14:08.785759 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:14:08.785772 | orchestrator | 
2026-02-02 06:14:08.785785 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-02 06:14:08.785798 | orchestrator | Monday 02 February 2026 06:13:48 +0000 (0:00:01.453) 0:40:16.553 ******* 2026-02-02 06:14:08.785811 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:14:08.785824 | orchestrator | 2026-02-02 06:14:08.785836 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-02 06:14:08.785847 | orchestrator | Monday 02 February 2026 06:13:50 +0000 (0:00:01.155) 0:40:17.708 ******* 2026-02-02 06:14:08.785857 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:14:08.785868 | orchestrator | 2026-02-02 06:14:08.785879 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-02 06:14:08.785889 | orchestrator | Monday 02 February 2026 06:13:51 +0000 (0:00:01.498) 0:40:19.207 ******* 2026-02-02 06:14:08.785900 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:14:08.785910 | orchestrator | 2026-02-02 06:14:08.785921 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-02 06:14:08.785932 | orchestrator | Monday 02 February 2026 06:13:52 +0000 (0:00:01.101) 0:40:20.308 ******* 2026-02-02 06:14:08.785943 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:14:08.785953 | orchestrator | 2026-02-02 06:14:08.785972 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-02 06:14:08.785983 | orchestrator | Monday 02 February 2026 06:13:53 +0000 (0:00:01.134) 0:40:21.443 ******* 2026-02-02 06:14:08.785994 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:14:08.786004 | orchestrator | 2026-02-02 06:14:08.786077 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-02 06:14:08.786121 | orchestrator | Monday 02 February 2026 06:13:54 +0000 (0:00:01.114) 0:40:22.558 
******* 2026-02-02 06:14:08.786205 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:14:08.786229 | orchestrator | 2026-02-02 06:14:08.786248 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-02 06:14:08.786267 | orchestrator | Monday 02 February 2026 06:13:56 +0000 (0:00:01.189) 0:40:23.747 ******* 2026-02-02 06:14:08.786284 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:14:08.786295 | orchestrator | 2026-02-02 06:14:08.786305 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-02 06:14:08.786316 | orchestrator | Monday 02 February 2026 06:13:57 +0000 (0:00:01.152) 0:40:24.900 ******* 2026-02-02 06:14:08.786327 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 06:14:08.786338 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 06:14:08.786366 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 06:14:08.786377 | orchestrator | 2026-02-02 06:14:08.786398 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-02 06:14:08.786410 | orchestrator | Monday 02 February 2026 06:13:59 +0000 (0:00:02.004) 0:40:26.904 ******* 2026-02-02 06:14:08.786420 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:14:08.786431 | orchestrator | 2026-02-02 06:14:08.786441 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-02 06:14:08.786452 | orchestrator | Monday 02 February 2026 06:14:00 +0000 (0:00:01.246) 0:40:28.151 ******* 2026-02-02 06:14:08.786462 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 06:14:08.786473 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 06:14:08.786484 | 
orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 06:14:08.786494 | orchestrator | 2026-02-02 06:14:08.786504 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-02 06:14:08.786515 | orchestrator | Monday 02 February 2026 06:14:03 +0000 (0:00:03.234) 0:40:31.385 ******* 2026-02-02 06:14:08.786526 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-02 06:14:08.786537 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-02 06:14:08.786547 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-02 06:14:08.786558 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:14:08.786569 | orchestrator | 2026-02-02 06:14:08.786579 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-02 06:14:08.786590 | orchestrator | Monday 02 February 2026 06:14:05 +0000 (0:00:01.799) 0:40:33.184 ******* 2026-02-02 06:14:08.786603 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-02 06:14:08.786617 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-02 06:14:08.786628 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-02 06:14:08.786653 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:14:08.786664 | orchestrator | 2026-02-02 
06:14:08.786675 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-02 06:14:08.786686 | orchestrator | Monday 02 February 2026 06:14:07 +0000 (0:00:01.961) 0:40:35.146 ******* 2026-02-02 06:14:08.786699 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 06:14:08.786713 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 06:14:08.786731 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 06:14:08.786742 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:14:08.786753 | orchestrator | 2026-02-02 06:14:08.786772 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-02 06:14:27.783849 | orchestrator | Monday 02 February 2026 06:14:08 +0000 (0:00:01.206) 0:40:36.352 ******* 2026-02-02 06:14:27.783965 | orchestrator | 
ok: [testbed-node-4] => (item={'changed': False, 'stdout': '01c921aa07f8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-02 06:14:01.081384', 'end': '2026-02-02 06:14:01.132541', 'delta': '0:00:00.051157', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['01c921aa07f8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-02 06:14:27.783988 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'c530967d0aad', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-02 06:14:01.977117', 'end': '2026-02-02 06:14:02.026665', 'delta': '0:00:00.049548', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c530967d0aad'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-02 06:14:27.784001 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'a68c96a70534', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-02 06:14:02.564457', 'end': '2026-02-02 06:14:02.620193', 'delta': '0:00:00.055736', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a68c96a70534'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-02 06:14:27.784036 | orchestrator | 2026-02-02 06:14:27.784050 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-02 06:14:27.784061 | orchestrator | Monday 02 February 2026 06:14:10 +0000 (0:00:01.274) 0:40:37.627 ******* 2026-02-02 06:14:27.784073 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:14:27.784085 | orchestrator | 2026-02-02 06:14:27.784096 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-02 06:14:27.784106 | orchestrator | Monday 02 February 2026 06:14:11 +0000 (0:00:01.298) 0:40:38.926 ******* 2026-02-02 06:14:27.784117 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:14:27.784129 | orchestrator | 2026-02-02 06:14:27.784140 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-02 06:14:27.784151 | orchestrator | Monday 02 February 2026 06:14:12 +0000 (0:00:01.270) 0:40:40.197 ******* 2026-02-02 06:14:27.784161 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:14:27.784202 | orchestrator | 2026-02-02 06:14:27.784213 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-02 06:14:27.784224 | orchestrator | Monday 02 February 2026 06:14:13 +0000 (0:00:01.218) 0:40:41.415 ******* 2026-02-02 06:14:27.784235 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-02 06:14:27.784246 | orchestrator | 2026-02-02 06:14:27.784257 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 06:14:27.784268 | orchestrator | 
Monday 02 February 2026 06:14:15 +0000 (0:00:02.021) 0:40:43.436 ******* 2026-02-02 06:14:27.784278 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:14:27.784289 | orchestrator | 2026-02-02 06:14:27.784300 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-02 06:14:27.784310 | orchestrator | Monday 02 February 2026 06:14:17 +0000 (0:00:01.185) 0:40:44.622 ******* 2026-02-02 06:14:27.784321 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:14:27.784332 | orchestrator | 2026-02-02 06:14:27.784343 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-02 06:14:27.784353 | orchestrator | Monday 02 February 2026 06:14:18 +0000 (0:00:01.122) 0:40:45.744 ******* 2026-02-02 06:14:27.784365 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:14:27.784377 | orchestrator | 2026-02-02 06:14:27.784390 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 06:14:27.784402 | orchestrator | Monday 02 February 2026 06:14:19 +0000 (0:00:01.223) 0:40:46.967 ******* 2026-02-02 06:14:27.784430 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:14:27.784443 | orchestrator | 2026-02-02 06:14:27.784456 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-02 06:14:27.784484 | orchestrator | Monday 02 February 2026 06:14:20 +0000 (0:00:01.124) 0:40:48.092 ******* 2026-02-02 06:14:27.784498 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:14:27.784510 | orchestrator | 2026-02-02 06:14:27.784523 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-02 06:14:27.784536 | orchestrator | Monday 02 February 2026 06:14:21 +0000 (0:00:01.125) 0:40:49.218 ******* 2026-02-02 06:14:27.784549 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:14:27.784561 | orchestrator | 2026-02-02 06:14:27.784574 | 
orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-02 06:14:27.784584 | orchestrator | Monday 02 February 2026 06:14:22 +0000 (0:00:01.151) 0:40:50.369 ******* 2026-02-02 06:14:27.784595 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:14:27.784605 | orchestrator | 2026-02-02 06:14:27.784616 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-02 06:14:27.784627 | orchestrator | Monday 02 February 2026 06:14:23 +0000 (0:00:01.113) 0:40:51.482 ******* 2026-02-02 06:14:27.784637 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:14:27.784648 | orchestrator | 2026-02-02 06:14:27.784669 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-02 06:14:27.784680 | orchestrator | Monday 02 February 2026 06:14:25 +0000 (0:00:01.286) 0:40:52.769 ******* 2026-02-02 06:14:27.784690 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:14:27.784701 | orchestrator | 2026-02-02 06:14:27.784712 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-02 06:14:27.784723 | orchestrator | Monday 02 February 2026 06:14:26 +0000 (0:00:01.111) 0:40:53.881 ******* 2026-02-02 06:14:27.784734 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:14:27.784745 | orchestrator | 2026-02-02 06:14:27.784755 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-02 06:14:27.784766 | orchestrator | Monday 02 February 2026 06:14:27 +0000 (0:00:01.207) 0:40:55.089 ******* 2026-02-02 06:14:27.784778 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 
'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:14:27.784792 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--106e1245--4ea8--54a2--9b27--5c2b147fae19-osd--block--106e1245--4ea8--54a2--9b27--5c2b147fae19', 'dm-uuid-LVM-7fojGdQjjxzlZ1d67G3lfXV0uQvvNrpG74l8TP6AWG5LY1LTlUkEVjmQPc2hTMkL'], 'uuids': ['0037b285-4ac2-45c2-8d5f-985073fa4cde'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2d3e981f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['74l8TP-6AWG-5LY1-LTlU-kEVj-mQPc-2hTMkL']}})  2026-02-02 06:14:27.784805 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_076229ff-17a9-47be-973d-14b64a36a012', 'scsi-SQEMU_QEMU_HARDDISK_076229ff-17a9-47be-973d-14b64a36a012'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '076229ff', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-02 06:14:27.784817 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-AITawh-CkpC-7L3c-Vqqe-GXUP-7eEh-WwcXRH', 'scsi-0QEMU_QEMU_HARDDISK_9dac4244-a4bc-44f9-ad81-53a595dd15e5', 'scsi-SQEMU_QEMU_HARDDISK_9dac4244-a4bc-44f9-ad81-53a595dd15e5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9dac4244', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89-osd--block--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89']}})  2026-02-02 06:14:27.784843 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:14:29.113429 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:14:29.113562 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-26-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-02 06:14:29.113582 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:14:29.113595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-HvMSds-O6Yc-SLb4-aDqG-ATNE-cOud-g8iQom', 'dm-uuid-CRYPT-LUKS2-6399826b15f3492994c0bc4d1d3bf1c1-HvMSds-O6Yc-SLb4-aDqG-ATNE-cOud-g8iQom'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-02 06:14:29.113606 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:14:29.113619 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89-osd--block--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89', 'dm-uuid-LVM-bGXwDmNnGJLl15xDO66UDgeGoDbpg8C0HvMSdsO6YcSLb4aDqGATNEcOudg8iQom'], 'uuids': ['6399826b-15f3-4929-94c0-bc4d1d3bf1c1'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '9dac4244', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['HvMSds-O6Yc-SLb4-aDqG-ATNE-cOud-g8iQom']}})  2026-02-02 06:14:29.113646 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-QbZaLy-yUYT-ccut-PcI7-2pGL-9PmJ-6NoPFr', 'scsi-0QEMU_QEMU_HARDDISK_2d3e981f-8554-4288-941a-275f46913f28', 'scsi-SQEMU_QEMU_HARDDISK_2d3e981f-8554-4288-941a-275f46913f28'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2d3e981f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--106e1245--4ea8--54a2--9b27--5c2b147fae19-osd--block--106e1245--4ea8--54a2--9b27--5c2b147fae19']}})  2026-02-02 06:14:29.113677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:14:29.113701 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6d8209b1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part16', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part14', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part15', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part1', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-02 06:14:29.113715 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:14:29.113726 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:14:29.113743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-74l8TP-6AWG-5LY1-LTlU-kEVj-mQPc-2hTMkL', 'dm-uuid-CRYPT-LUKS2-0037b2854ac245c28d5f985073fa4cde-74l8TP-6AWG-5LY1-LTlU-kEVj-mQPc-2hTMkL'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-02 06:14:29.113762 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:14:29.113776 | orchestrator | 2026-02-02 06:14:29.113788 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-02 06:14:29.113800 | orchestrator | Monday 02 February 2026 06:14:28 +0000 (0:00:01.387) 0:40:56.476 ******* 2026-02-02 06:14:29.113820 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:14:30.326507 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--106e1245--4ea8--54a2--9b27--5c2b147fae19-osd--block--106e1245--4ea8--54a2--9b27--5c2b147fae19', 'dm-uuid-LVM-7fojGdQjjxzlZ1d67G3lfXV0uQvvNrpG74l8TP6AWG5LY1LTlUkEVjmQPc2hTMkL'], 'uuids': ['0037b285-4ac2-45c2-8d5f-985073fa4cde'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2d3e981f', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['74l8TP-6AWG-5LY1-LTlU-kEVj-mQPc-2hTMkL']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:14:30.326579 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_076229ff-17a9-47be-973d-14b64a36a012', 'scsi-SQEMU_QEMU_HARDDISK_076229ff-17a9-47be-973d-14b64a36a012'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '076229ff', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:14:30.326588 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-AITawh-CkpC-7L3c-Vqqe-GXUP-7eEh-WwcXRH', 'scsi-0QEMU_QEMU_HARDDISK_9dac4244-a4bc-44f9-ad81-53a595dd15e5', 'scsi-SQEMU_QEMU_HARDDISK_9dac4244-a4bc-44f9-ad81-53a595dd15e5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9dac4244', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89-osd--block--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:14:30.326613 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:14:30.326641 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:14:30.326661 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-26-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:14:30.326666 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:14:30.326670 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-HvMSds-O6Yc-SLb4-aDqG-ATNE-cOud-g8iQom', 'dm-uuid-CRYPT-LUKS2-6399826b15f3492994c0bc4d1d3bf1c1-HvMSds-O6Yc-SLb4-aDqG-ATNE-cOud-g8iQom'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:14:30.326674 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:14:30.326682 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89-osd--block--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89', 'dm-uuid-LVM-bGXwDmNnGJLl15xDO66UDgeGoDbpg8C0HvMSdsO6YcSLb4aDqGATNEcOudg8iQom'], 'uuids': ['6399826b-15f3-4929-94c0-bc4d1d3bf1c1'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '9dac4244', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['HvMSds-O6Yc-SLb4-aDqG-ATNE-cOud-g8iQom']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:14:30.326696 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-QbZaLy-yUYT-ccut-PcI7-2pGL-9PmJ-6NoPFr', 'scsi-0QEMU_QEMU_HARDDISK_2d3e981f-8554-4288-941a-275f46913f28', 'scsi-SQEMU_QEMU_HARDDISK_2d3e981f-8554-4288-941a-275f46913f28'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2d3e981f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--106e1245--4ea8--54a2--9b27--5c2b147fae19-osd--block--106e1245--4ea8--54a2--9b27--5c2b147fae19']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:14:49.167047 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:14:49.167239 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6d8209b1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part16', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part14', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part15', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part1', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:14:49.167287 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:14:49.167319 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:14:49.167333 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-74l8TP-6AWG-5LY1-LTlU-kEVj-mQPc-2hTMkL', 'dm-uuid-CRYPT-LUKS2-0037b2854ac245c28d5f985073fa4cde-74l8TP-6AWG-5LY1-LTlU-kEVj-mQPc-2hTMkL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:14:49.167345 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:14:49.167358 | orchestrator |
2026-02-02 06:14:49.167370 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-02 06:14:49.167382 | orchestrator | Monday 02 February 2026 06:14:30 +0000 (0:00:01.425) 0:40:57.902 *******
2026-02-02 06:14:49.167393 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:14:49.167404 | orchestrator |
2026-02-02 06:14:49.167415 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-02 06:14:49.167426 | orchestrator | Monday 02 February 2026 06:14:31 +0000 (0:00:01.502) 0:40:59.405 *******
2026-02-02 06:14:49.167437 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:14:49.167448 | orchestrator |
2026-02-02 06:14:49.167459 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-02 06:14:49.167470 | orchestrator | Monday 02 February 2026 06:14:32 +0000 (0:00:01.130) 0:41:00.535 *******
2026-02-02 06:14:49.167481 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:14:49.167491 | orchestrator |
2026-02-02 06:14:49.167502 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-02 06:14:49.167513 | orchestrator | Monday 02 February 2026 06:14:34 +0000 (0:00:01.461) 0:41:01.997 *******
2026-02-02 06:14:49.167524 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:14:49.167534 | orchestrator |
2026-02-02 06:14:49.167546 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-02 06:14:49.167557 | orchestrator | Monday 02 February 2026 06:14:35 +0000 (0:00:01.157) 0:41:03.155 *******
2026-02-02 06:14:49.167576 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:14:49.167590 | orchestrator |
2026-02-02 06:14:49.167602 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-02 06:14:49.167615 | orchestrator | Monday 02 February 2026 06:14:36 +0000 (0:00:01.288) 0:41:04.443 *******
2026-02-02 06:14:49.167627 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:14:49.167640 | orchestrator |
2026-02-02 06:14:49.167653 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-02 06:14:49.167665 | orchestrator | Monday 02 February 2026 06:14:38 +0000 (0:00:01.210) 0:41:05.653 *******
2026-02-02 06:14:49.167677 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-02 06:14:49.167690 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-02 06:14:49.167703 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-02 06:14:49.167715 | orchestrator |
2026-02-02 06:14:49.167729 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-02 06:14:49.167741 | orchestrator | Monday 02 February 2026 06:14:40 +0000 (0:00:02.333) 0:41:07.987 *******
2026-02-02 06:14:49.167754 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-02 06:14:49.167766 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-02 06:14:49.167779 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-02 06:14:49.167791 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:14:49.167803 | orchestrator |
2026-02-02 06:14:49.167815 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-02 06:14:49.167833 | orchestrator | Monday 02 February 2026 06:14:41 +0000 (0:00:01.211) 0:41:09.198 *******
2026-02-02 06:14:49.167846 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4
2026-02-02 06:14:49.167859 | orchestrator |
2026-02-02 06:14:49.167872 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-02 06:14:49.167887 | orchestrator | Monday 02 February 2026 06:14:42 +0000 (0:00:01.328) 0:41:10.527 *******
2026-02-02 06:14:49.167900 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:14:49.167912 | orchestrator |
2026-02-02 06:14:49.167924 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-02 06:14:49.167937 | orchestrator | Monday 02 February 2026 06:14:44 +0000 (0:00:01.170) 0:41:11.697 *******
2026-02-02 06:14:49.167949 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:14:49.167960 | orchestrator |
2026-02-02 06:14:49.167971 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-02 06:14:49.167981 | orchestrator | Monday 02 February 2026 06:14:45 +0000 (0:00:01.201) 0:41:12.899 *******
2026-02-02 06:14:49.167992 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:14:49.168003 | orchestrator |
2026-02-02 06:14:49.168014 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-02 06:14:49.168025 | orchestrator | Monday 02 February 2026 06:14:46 +0000 (0:00:01.169) 0:41:14.068 *******
2026-02-02 06:14:49.168035 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:14:49.168046 | orchestrator |
2026-02-02 06:14:49.168057 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-02 06:14:49.168068 | orchestrator | Monday 02 February 2026 06:14:47 +0000 (0:00:01.259) 0:41:15.328 *******
2026-02-02 06:14:49.168085 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-02 06:15:30.169970 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-02 06:15:30.170147 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-02 06:15:30.170236 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:15:30.170252 | orchestrator |
2026-02-02 06:15:30.170265 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-02 06:15:30.170278 | orchestrator | Monday 02 February 2026 06:14:49 +0000 (0:00:01.410) 0:41:16.738 *******
2026-02-02 06:15:30.170315 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-02 06:15:30.170327 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-02 06:15:30.170337 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-02 06:15:30.170348 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:15:30.170359 | orchestrator |
2026-02-02 06:15:30.170370 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-02 06:15:30.170381 | orchestrator | Monday 02 February 2026 06:14:50 +0000 (0:00:01.528) 0:41:18.266 *******
2026-02-02 06:15:30.170392 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-02 06:15:30.170402 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-02 06:15:30.170413 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-02 06:15:30.170423 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:15:30.170434 | orchestrator |
2026-02-02 06:15:30.170444 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-02 06:15:30.170455 | orchestrator | Monday 02 February 2026 06:14:52 +0000 (0:00:01.456) 0:41:19.723 *******
2026-02-02 06:15:30.170466 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:15:30.170477 | orchestrator |
2026-02-02 06:15:30.170488 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-02 06:15:30.170499 | orchestrator | Monday 02 February 2026 06:14:53 +0000 (0:00:01.183) 0:41:20.907 *******
2026-02-02 06:15:30.170512 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-02 06:15:30.170525 | orchestrator |
2026-02-02 06:15:30.170538 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-02 06:15:30.170550 | orchestrator | Monday 02 February 2026 06:14:54 +0000 (0:00:01.335) 0:41:22.242 *******
2026-02-02 06:15:30.170563 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-02 06:15:30.170576 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 06:15:30.170588 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 06:15:30.170600 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-02 06:15:30.170612 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-02-02 06:15:30.170624 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-02 06:15:30.170637 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-02 06:15:30.170649 | orchestrator |
2026-02-02 06:15:30.170661 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-02 06:15:30.170673 | orchestrator | Monday 02 February 2026 06:14:57 +0000 (0:00:02.508) 0:41:24.751 *******
2026-02-02 06:15:30.170685 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-02 06:15:30.170697 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 06:15:30.170709 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 06:15:30.170722 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-02 06:15:30.170734 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-02-02 06:15:30.170747 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-02 06:15:30.170772 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-02 06:15:30.170785 | orchestrator |
2026-02-02 06:15:30.170798 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-02-02 06:15:30.170810 | orchestrator | Monday 02 February 2026 06:14:59 +0000 (0:00:02.563) 0:41:27.314 *******
2026-02-02 06:15:30.170822 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:15:30.170834 | orchestrator |
2026-02-02 06:15:30.170846 | orchestrator | TASK [Set num_osds] ************************************************************
2026-02-02 06:15:30.170867 | orchestrator | Monday 02 February 2026 06:15:00 +0000 (0:00:01.232) 0:41:28.547 *******
2026-02-02 06:15:30.170878 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:15:30.170889 | orchestrator |
2026-02-02 06:15:30.170899 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-02-02 06:15:30.170910 | orchestrator | Monday 02 February 2026 06:15:01 +0000 (0:00:00.880) 0:41:29.427 *******
2026-02-02 06:15:30.170920 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:15:30.170931 | orchestrator |
2026-02-02 06:15:30.170942 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-02-02 06:15:30.170952 | orchestrator | Monday 02 February 2026 06:15:02 +0000 (0:00:00.955) 0:41:30.383 *******
2026-02-02 06:15:30.170963 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-02-02 06:15:30.170974 | orchestrator | changed: [testbed-node-4] => (item=5)
2026-02-02 06:15:30.170984 | orchestrator |
2026-02-02 06:15:30.170995 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-02 06:15:30.171006 | orchestrator | Monday 02 February 2026 06:15:06 +0000 (0:00:03.843) 0:41:34.226 *******
2026-02-02 06:15:30.171016 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4
2026-02-02 06:15:30.171028 | orchestrator |
2026-02-02 06:15:30.171039 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-02 06:15:30.171069 | orchestrator | Monday 02 February 2026 06:15:07 +0000 (0:00:01.115) 0:41:35.341 *******
2026-02-02 06:15:30.171080 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4
2026-02-02 06:15:30.171091 | orchestrator |
2026-02-02 06:15:30.171102 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-02 06:15:30.171113 | orchestrator | Monday 02 February 2026 06:15:08 +0000 (0:00:01.154) 0:41:36.496 *******
2026-02-02 06:15:30.171123 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:15:30.171134 | orchestrator |
2026-02-02 06:15:30.171145 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-02 06:15:30.171155 | orchestrator | Monday 02 February 2026 06:15:10 +0000 (0:00:01.153) 0:41:37.649 *******
2026-02-02 06:15:30.171189 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:15:30.171200 | orchestrator |
2026-02-02 06:15:30.171211 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-02 06:15:30.171221 | orchestrator | Monday 02 February 2026 06:15:11 +0000 (0:00:01.496) 0:41:39.146 *******
2026-02-02 06:15:30.171232 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:15:30.171243 | orchestrator |
2026-02-02 06:15:30.171253 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-02 06:15:30.171264 | orchestrator | Monday 02 February 2026 06:15:13 +0000 (0:00:01.517) 0:41:40.663 *******
2026-02-02 06:15:30.171275 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:15:30.171285 | orchestrator |
2026-02-02 06:15:30.171296 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-02 06:15:30.171307 | orchestrator | Monday 02 February 2026 06:15:14 +0000 (0:00:01.557) 0:41:42.221 *******
2026-02-02 06:15:30.171317 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:15:30.171328 | orchestrator |
2026-02-02 06:15:30.171339 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-02 06:15:30.171349 | orchestrator | Monday 02 February 2026 06:15:15 +0000 (0:00:01.183) 0:41:43.404 *******
2026-02-02 06:15:30.171360 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:15:30.171371 | orchestrator |
2026-02-02 06:15:30.171381 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-02 06:15:30.171392 | orchestrator | Monday 02 February 2026 06:15:17 +0000 (0:00:01.238) 0:41:44.643 *******
2026-02-02 06:15:30.171403 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:15:30.171413 | orchestrator |
2026-02-02 06:15:30.171424 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-02 06:15:30.171435 | orchestrator | Monday 02 February 2026 06:15:18 +0000 (0:00:01.193) 0:41:45.836 *******
2026-02-02 06:15:30.171453 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:15:30.171464 | orchestrator |
2026-02-02 06:15:30.171474 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-02 06:15:30.171485 | orchestrator | Monday 02 February 2026 06:15:19 +0000 (0:00:01.546) 0:41:47.383 *******
2026-02-02 06:15:30.171495 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:15:30.171506 | orchestrator |
2026-02-02 06:15:30.171517 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-02 06:15:30.171527 | orchestrator | Monday 02 February 2026 06:15:21 +0000 (0:00:01.519) 0:41:48.903 *******
2026-02-02 06:15:30.171538 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:15:30.171549 | orchestrator |
2026-02-02 06:15:30.171560 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-02 06:15:30.171570 | orchestrator | Monday 02 February 2026 06:15:22 +0000 (0:00:00.774) 0:41:49.677 *******
2026-02-02 06:15:30.171581 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:15:30.171591 | orchestrator |
2026-02-02 06:15:30.171602 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-02 06:15:30.171613 | orchestrator | Monday 02 February 2026 06:15:22 +0000 (0:00:00.772) 0:41:50.450 *******
2026-02-02 06:15:30.171623 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:15:30.171634 | orchestrator |
2026-02-02 06:15:30.171644 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-02 06:15:30.171655 | orchestrator | Monday 02 February 2026 06:15:23 +0000 (0:00:00.776) 0:41:51.226 *******
2026-02-02 06:15:30.171666 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:15:30.171676 | orchestrator |
2026-02-02 06:15:30.171687 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-02 06:15:30.171703 | orchestrator | Monday 02 February 2026 06:15:24 +0000 (0:00:00.808) 0:41:52.035 *******
2026-02-02 06:15:30.171714 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:15:30.171725 | orchestrator |
2026-02-02 06:15:30.171736 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-02 06:15:30.171746 | orchestrator | Monday 02 February 2026 06:15:25 +0000 (0:00:00.832) 0:41:52.867 *******
2026-02-02 06:15:30.171757 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:15:30.171768 | orchestrator |
2026-02-02 06:15:30.171778 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-02 06:15:30.171788 | orchestrator | Monday 02 February 2026 06:15:26 +0000 (0:00:00.763) 0:41:53.630 *******
2026-02-02 06:15:30.171799 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:15:30.171810 | orchestrator |
2026-02-02 06:15:30.171820 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-02 06:15:30.171831 | orchestrator | Monday 02 February 2026 06:15:26 +0000 (0:00:00.771) 0:41:54.402 *******
2026-02-02 06:15:30.171841 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:15:30.171852 | orchestrator |
2026-02-02 06:15:30.171862 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-02 06:15:30.171873 | orchestrator | Monday 02 February 2026 06:15:27 +0000 (0:00:00.782) 0:41:55.184 *******
2026-02-02 06:15:30.171883 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:15:30.171894 | orchestrator |
2026-02-02 06:15:30.171904 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-02 06:15:30.171915 | orchestrator | Monday 02 February 2026 06:15:28 +0000 (0:00:00.803) 0:41:55.988 *******
2026-02-02 06:15:30.171925 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:15:30.171936 | orchestrator |
2026-02-02 06:15:30.171946 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-02 06:15:30.171957 | orchestrator | Monday 02 February 2026 06:15:29 +0000 (0:00:00.941) 0:41:56.930 *******
2026-02-02 06:15:30.171974 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:13.153550 | orchestrator |
2026-02-02 06:16:13.153646 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-02 06:16:13.153659 | orchestrator | Monday 02 February 2026 06:15:30 +0000 (0:00:00.811) 0:41:57.742 *******
2026-02-02 06:16:13.153687 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:13.153696 | orchestrator |
2026-02-02 06:16:13.153704 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-02 06:16:13.153712 | orchestrator | Monday 02 February 2026 06:15:30 +0000 (0:00:00.810) 0:41:58.553 *******
2026-02-02 06:16:13.153719 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:13.153727 | orchestrator |
2026-02-02 06:16:13.153734 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-02 06:16:13.153742 | orchestrator | Monday 02 February 2026 06:15:31 +0000 (0:00:00.803) 0:41:59.357 *******
2026-02-02 06:16:13.153749 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:13.153756 | orchestrator |
2026-02-02 06:16:13.153763 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-02 06:16:13.153770 | orchestrator | Monday 02 February 2026 06:15:32 +0000 (0:00:00.835) 0:42:00.192 *******
2026-02-02 06:16:13.153778 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:13.153785 | orchestrator |
2026-02-02 06:16:13.153792 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-02 06:16:13.153799 | orchestrator | Monday 02 February 2026 06:15:33 +0000 (0:00:00.799) 0:42:00.991 *******
2026-02-02 06:16:13.153806 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:13.153814 | orchestrator |
2026-02-02 06:16:13.153821 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-02 06:16:13.153828 | orchestrator | Monday 02 February 2026 06:15:34 +0000 (0:00:00.828) 0:42:01.820 *******
2026-02-02 06:16:13.153835 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:13.153842 | orchestrator |
2026-02-02 06:16:13.153850 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-02 06:16:13.153858 | orchestrator | Monday 02 February 2026 06:15:34 +0000 (0:00:00.748) 0:42:02.568 *******
2026-02-02 06:16:13.153865 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:13.153872 | orchestrator |
2026-02-02 06:16:13.153880 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-02 06:16:13.153887 | orchestrator | Monday 02 February 2026 06:15:35 +0000 (0:00:00.785) 0:42:03.354 *******
2026-02-02 06:16:13.153894 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:13.153901 | orchestrator |
2026-02-02 06:16:13.153908 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-02 06:16:13.153916 | orchestrator | Monday 02 February 2026 06:15:36 +0000 (0:00:00.757) 0:42:04.112 *******
2026-02-02 06:16:13.153923 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:13.153930 | orchestrator |
2026-02-02 06:16:13.153937 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-02 06:16:13.153944 | orchestrator | Monday 02 February 2026 06:15:37 +0000 (0:00:00.762) 0:42:04.874 *******
2026-02-02 06:16:13.153952 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:13.153959 | orchestrator |
2026-02-02 06:16:13.153966 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-02 06:16:13.153973 | orchestrator | Monday 02 February 2026 06:15:38 +0000 (0:00:00.753) 0:42:05.628 *******
2026-02-02 06:16:13.153980 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:13.153987 | orchestrator |
2026-02-02 06:16:13.153994 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-02 06:16:13.154002 | orchestrator | Monday 02 February 2026 06:15:39 +0000 (0:00:01.030) 0:42:06.658 *******
2026-02-02 06:16:13.154009 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:16:13.154062 | orchestrator |
2026-02-02 06:16:13.154070 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-02 06:16:13.154077 | orchestrator | Monday 02 February 2026 06:15:40 +0000 (0:00:01.576) 0:42:08.235 *******
2026-02-02 06:16:13.154085 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:16:13.154092 | orchestrator |
2026-02-02 06:16:13.154099 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-02 06:16:13.154107 | orchestrator | Monday 02 February 2026 06:15:42 +0000 (0:00:01.878) 0:42:10.113 *******
2026-02-02 06:16:13.154132 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4
2026-02-02 06:16:13.154141 | orchestrator |
2026-02-02 06:16:13.154148 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-02 06:16:13.154156 | orchestrator | Monday 02 February 2026 06:15:43 +0000 (0:00:01.201) 0:42:11.315 *******
2026-02-02 06:16:13.154185 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:13.154193 | orchestrator |
2026-02-02 06:16:13.154208 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-02 06:16:13.154215 | orchestrator | Monday 02 February 2026 06:15:44 +0000 (0:00:01.109) 0:42:12.425 *******
2026-02-02 06:16:13.154222 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:13.154230 | orchestrator |
2026-02-02 06:16:13.154237 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-02 06:16:13.154244 | orchestrator | Monday 02 February 2026 06:15:45 +0000 (0:00:01.146) 0:42:13.571 *******
2026-02-02 06:16:13.154255 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-02 06:16:13.154267 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-02 06:16:13.154279 | orchestrator |
2026-02-02 06:16:13.154290 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-02 06:16:13.154302 | orchestrator | Monday 02 February 2026 06:15:47 +0000 (0:00:01.822) 0:42:15.393 *******
2026-02-02 06:16:13.154313 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:16:13.154325 | orchestrator |
2026-02-02 06:16:13.154336 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-02 06:16:13.154348 | orchestrator | Monday 02 February 2026 06:15:49 +0000 (0:00:01.539) 0:42:16.933 *******
2026-02-02 06:16:13.154360 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:13.154372 | orchestrator |
2026-02-02 06:16:13.154403 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-02 06:16:13.154416 | orchestrator | Monday 02 February 2026 06:15:50 +0000 (0:00:01.153) 0:42:18.087 *******
2026-02-02 06:16:13.154429 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:13.154439 | orchestrator |
2026-02-02 06:16:13.154447 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-02 06:16:13.154454 | orchestrator | Monday 02 February 2026 06:15:51 +0000 (0:00:00.851) 0:42:18.938 *******
2026-02-02 06:16:13.154461 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:13.154468 | orchestrator |
2026-02-02 06:16:13.154475 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-02 06:16:13.154482 | orchestrator | Monday 02 February 2026 06:15:52 +0000 (0:00:00.902) 0:42:19.841 *******
2026-02-02 06:16:13.154490 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4
2026-02-02 06:16:13.154497 | orchestrator |
2026-02-02 06:16:13.154504 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-02 06:16:13.154511 | orchestrator | Monday 02 February 2026 06:15:53 +0000 (0:00:01.282) 0:42:21.123 *******
2026-02-02 06:16:13.154518 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:16:13.154525 | orchestrator |
2026-02-02 06:16:13.154533 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-02 06:16:13.154540 | orchestrator | Monday 02 February 2026 06:15:55 +0000 (0:00:01.751) 0:42:22.875 *******
2026-02-02 06:16:13.154547 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-02 06:16:13.154554 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-02 06:16:13.154561 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-02 06:16:13.154568 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:13.154575 | orchestrator |
2026-02-02 06:16:13.154583 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-02 06:16:13.154590 | orchestrator | Monday 02 February 2026 06:15:56 +0000 (0:00:01.171) 0:42:24.046 *******
2026-02-02 06:16:13.154604 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:13.154611 | orchestrator |
2026-02-02 06:16:13.154618 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-02 06:16:13.154625 | orchestrator | Monday 02 February 2026 06:15:57 +0000 (0:00:01.116) 0:42:25.163 *******
2026-02-02 06:16:13.154633 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:13.154640 | orchestrator |
2026-02-02 06:16:13.154647 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-02 06:16:13.154654 | orchestrator | Monday 02 February 2026 06:15:58 +0000 (0:00:01.254) 0:42:26.417 *******
2026-02-02 06:16:13.154661 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:13.154669 | orchestrator |
2026-02-02 06:16:13.154676 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-02 06:16:13.154683 | orchestrator | Monday 02 February 2026 06:15:59 +0000 (0:00:01.162) 0:42:27.580 *******
2026-02-02 06:16:13.154690 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:13.154697 | orchestrator |
2026-02-02 06:16:13.154704 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-02 06:16:13.154711 | orchestrator | Monday 02 February 2026 06:16:01 +0000 (0:00:01.181) 0:42:28.762 *******
2026-02-02 06:16:13.154719 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:13.154726 | orchestrator |
2026-02-02 06:16:13.154733 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-02 06:16:13.154740 | orchestrator | Monday 02 February 2026 06:16:01 +0000 (0:00:00.770) 0:42:29.533 *******
2026-02-02 06:16:13.154747 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:16:13.154754 | orchestrator |
2026-02-02 06:16:13.154761 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-02 06:16:13.154769 | orchestrator | Monday 02 February 2026 06:16:04 +0000 (0:00:02.126) 0:42:31.659 *******
2026-02-02 06:16:13.154776 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:16:13.154783 | orchestrator |
2026-02-02 06:16:13.154790 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-02 06:16:13.154797 | orchestrator | Monday 02 February 2026 06:16:04 +0000 (0:00:00.778) 0:42:32.438 *******
2026-02-02 06:16:13.154809 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4
2026-02-02 06:16:13.154816 | orchestrator |
2026-02-02 06:16:13.154824 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-02 06:16:13.154831 | orchestrator | Monday 02 February 2026 06:16:06 +0000 (0:00:01.155) 0:42:33.593 *******
2026-02-02 06:16:13.154838 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:13.154845 | orchestrator |
2026-02-02 06:16:13.154852 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-02 06:16:13.154859 | orchestrator | Monday 02 February 2026 06:16:07 +0000 (0:00:01.167) 0:42:34.761 *******
2026-02-02 06:16:13.154866 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:13.154874 | orchestrator |
2026-02-02 06:16:13.154881 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-02 06:16:13.154888 | orchestrator | Monday 02 February 2026 06:16:08 +0000 (0:00:01.330) 0:42:36.092 *******
2026-02-02 06:16:13.154895 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:13.154902 | orchestrator |
2026-02-02 06:16:13.154909 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-02 06:16:13.154916 | orchestrator | Monday 02 February 2026 06:16:09 +0000 (0:00:01.169) 0:42:37.261 *******
2026-02-02 06:16:13.154924 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:13.154931 | orchestrator |
2026-02-02 06:16:13.154938 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-02 06:16:13.154945 | orchestrator | Monday 02 February 2026 06:16:10 +0000 (0:00:01.140) 0:42:38.401 *******
2026-02-02 06:16:13.154952 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:13.154959 | orchestrator |
2026-02-02 06:16:13.154966 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-02 06:16:13.154978 | orchestrator | Monday 02 February 2026 06:16:11 +0000 (0:00:01.177) 0:42:39.579 *******
2026-02-02 06:16:13.154991 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:54.636218 | orchestrator |
2026-02-02 06:16:54.636330 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-02 06:16:54.636346 | orchestrator | Monday 02 February 2026 06:16:13 +0000 (0:00:01.143) 0:42:40.723 *******
2026-02-02 06:16:54.636357 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:54.636367 | orchestrator |
2026-02-02 06:16:54.636377 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-02 06:16:54.636387 | orchestrator | Monday 02 February 2026 06:16:14 +0000 (0:00:01.130) 0:42:41.854 *******
2026-02-02 06:16:54.636396 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:54.636406 | orchestrator |
2026-02-02 06:16:54.636415 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-02 06:16:54.636425 | orchestrator | Monday 02 February 2026 06:16:15 +0000 (0:00:01.148) 0:42:43.002 *******
2026-02-02 06:16:54.636435 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:16:54.636445 | orchestrator |
2026-02-02 06:16:54.636454 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-02 06:16:54.636464 | orchestrator | Monday 02 February 2026 06:16:16 +0000 (0:00:00.812) 0:42:43.815 *******
2026-02-02 06:16:54.636474 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4
2026-02-02 06:16:54.636484 | orchestrator |
2026-02-02 06:16:54.636493 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-02 06:16:54.636504 | orchestrator | Monday 02 February 2026 06:16:17 +0000 (0:00:01.160) 0:42:44.975 *******
2026-02-02 06:16:54.636514 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph)
2026-02-02 06:16:54.636524 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/)
2026-02-02 06:16:54.636533 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-02-02 06:16:54.636543 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-02-02 06:16:54.636552 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-02-02 06:16:54.636561 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-02-02 06:16:54.636571 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-02-02 06:16:54.636580 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-02-02 06:16:54.636589 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-02 06:16:54.636599 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-02 06:16:54.636608 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-02 06:16:54.636618 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-02 06:16:54.636627 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-02 06:16:54.636637 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-02 06:16:54.636646 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
2026-02-02 06:16:54.636655 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph)
2026-02-02 06:16:54.636665 | orchestrator |
2026-02-02 06:16:54.636674 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-02 06:16:54.636684 | orchestrator | Monday 02 February 2026 06:16:23 +0000 (0:00:06.128) 0:42:51.104 *******
2026-02-02 06:16:54.636693 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4
2026-02-02 06:16:54.636703 | orchestrator |
2026-02-02 06:16:54.636714 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-02 06:16:54.636725 | orchestrator | Monday 02 February 2026 06:16:24 +0000 (0:00:01.249) 0:42:52.353 *******
2026-02-02 06:16:54.636737 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-02 06:16:54.636749 | orchestrator |
2026-02-02 06:16:54.636783 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-02 06:16:54.636794 | orchestrator | Monday 02 February 2026 06:16:26 +0000 (0:00:01.529) 0:42:53.882 *******
2026-02-02 06:16:54.636820 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-02 06:16:54.636831 | orchestrator |
2026-02-02 06:16:54.636841 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-02 06:16:54.636850 | orchestrator | Monday 02 February 2026 06:16:27 +0000 (0:00:01.618) 0:42:55.501 *******
2026-02-02 06:16:54.636859 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:54.636869 | orchestrator |
2026-02-02 06:16:54.636878 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-02 06:16:54.636887 | orchestrator | Monday 02 February 2026 06:16:28 +0000 (0:00:00.790) 0:42:56.292 *******
2026-02-02 06:16:54.636897 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:54.636906 | orchestrator |
2026-02-02 06:16:54.636915 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-02 06:16:54.636925 | orchestrator | Monday 02 February 2026 06:16:29 +0000 (0:00:00.798) 0:42:57.090 *******
2026-02-02 06:16:54.636935 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:54.636952 | orchestrator |
2026-02-02 06:16:54.636967 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-02 06:16:54.636982 | orchestrator | Monday 02 February 2026 06:16:30 +0000 (0:00:00.761) 0:42:57.852 *******
2026-02-02 06:16:54.636999 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:54.637016 | orchestrator |
2026-02-02 06:16:54.637033 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-02 06:16:54.637048 | orchestrator | Monday 02 February 2026 06:16:31 +0000 (0:00:00.786) 0:42:58.638 *******
2026-02-02 06:16:54.637064 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:54.637078 | orchestrator |
2026-02-02 06:16:54.637094 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-02 06:16:54.637109 | orchestrator | Monday 02 February 2026 06:16:31 +0000 (0:00:00.765) 0:42:59.441 *******
2026-02-02 06:16:54.637148 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:54.637192 | orchestrator |
2026-02-02 06:16:54.637210 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-02 06:16:54.637229 | orchestrator | Monday 02 February 2026 06:16:32 +0000 (0:00:00.797) 0:43:00.206 *******
2026-02-02 06:16:54.637247 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:54.637266 | orchestrator |
2026-02-02 06:16:54.637284 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-02 06:16:54.637303 | orchestrator | Monday 02 February 2026 06:16:33 +0000 (0:00:00.797) 0:43:01.003 *******
2026-02-02 06:16:54.637321 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:16:54.637339 | orchestrator |
2026-02-02 06:16:54.637356 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-02 06:16:54.637373 | orchestrator | Monday 02
February 2026 06:16:34 +0000 (0:00:00.775) 0:43:01.779 ******* 2026-02-02 06:16:54.637383 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:16:54.637392 | orchestrator | 2026-02-02 06:16:54.637402 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-02 06:16:54.637411 | orchestrator | Monday 02 February 2026 06:16:34 +0000 (0:00:00.799) 0:43:02.579 ******* 2026-02-02 06:16:54.637420 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:16:54.637430 | orchestrator | 2026-02-02 06:16:54.637439 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-02 06:16:54.637449 | orchestrator | Monday 02 February 2026 06:16:35 +0000 (0:00:00.810) 0:43:03.389 ******* 2026-02-02 06:16:54.637458 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:16:54.637468 | orchestrator | 2026-02-02 06:16:54.637478 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-02 06:16:54.637498 | orchestrator | Monday 02 February 2026 06:16:36 +0000 (0:00:00.936) 0:43:04.325 ******* 2026-02-02 06:16:54.637508 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-02-02 06:16:54.637517 | orchestrator | 2026-02-02 06:16:54.637527 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-02 06:16:54.637536 | orchestrator | Monday 02 February 2026 06:16:40 +0000 (0:00:03.971) 0:43:08.297 ******* 2026-02-02 06:16:54.637546 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-02 06:16:54.637556 | orchestrator | 2026-02-02 06:16:54.637566 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-02 06:16:54.637575 | orchestrator | Monday 02 February 2026 06:16:41 +0000 (0:00:00.813) 0:43:09.111 ******* 2026-02-02 06:16:54.637587 | 
orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-02-02 06:16:54.637599 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-02-02 06:16:54.637610 | orchestrator | 2026-02-02 06:16:54.637620 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-02 06:16:54.637629 | orchestrator | Monday 02 February 2026 06:16:48 +0000 (0:00:07.125) 0:43:16.236 ******* 2026-02-02 06:16:54.637639 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:16:54.637648 | orchestrator | 2026-02-02 06:16:54.637658 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-02 06:16:54.637675 | orchestrator | Monday 02 February 2026 06:16:49 +0000 (0:00:00.782) 0:43:17.019 ******* 2026-02-02 06:16:54.637685 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:16:54.637694 | orchestrator | 2026-02-02 06:16:54.637704 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-02 06:16:54.637713 | orchestrator | Monday 02 February 2026 06:16:50 +0000 (0:00:00.789) 0:43:17.809 ******* 2026-02-02 06:16:54.637723 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:16:54.637732 | orchestrator | 2026-02-02 06:16:54.637742 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-02-02 06:16:54.637752 | orchestrator | Monday 02 February 2026 06:16:51 +0000 (0:00:00.802) 0:43:18.611 ******* 2026-02-02 06:16:54.637761 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:16:54.637771 | orchestrator | 2026-02-02 06:16:54.637780 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-02 06:16:54.637790 | orchestrator | Monday 02 February 2026 06:16:51 +0000 (0:00:00.811) 0:43:19.423 ******* 2026-02-02 06:16:54.637799 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:16:54.637809 | orchestrator | 2026-02-02 06:16:54.637818 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-02 06:16:54.637827 | orchestrator | Monday 02 February 2026 06:16:52 +0000 (0:00:00.807) 0:43:20.231 ******* 2026-02-02 06:16:54.637837 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:16:54.637846 | orchestrator | 2026-02-02 06:16:54.637856 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-02 06:16:54.637865 | orchestrator | Monday 02 February 2026 06:16:53 +0000 (0:00:00.926) 0:43:21.158 ******* 2026-02-02 06:16:54.637875 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-02 06:16:54.637884 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-02 06:16:54.637903 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-02 06:17:44.824469 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:17:44.824586 | orchestrator | 2026-02-02 06:17:44.824604 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-02 06:17:44.824617 | orchestrator | Monday 02 February 2026 06:16:54 +0000 (0:00:01.048) 0:43:22.206 ******* 2026-02-02 06:17:44.824628 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-02 06:17:44.824640 | 
orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-02 06:17:44.824651 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-02 06:17:44.824662 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:17:44.824673 | orchestrator | 2026-02-02 06:17:44.824684 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-02 06:17:44.824695 | orchestrator | Monday 02 February 2026 06:16:56 +0000 (0:00:01.532) 0:43:23.739 ******* 2026-02-02 06:17:44.824706 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-02 06:17:44.824717 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-02 06:17:44.824728 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-02 06:17:44.824738 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:17:44.824749 | orchestrator | 2026-02-02 06:17:44.824760 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-02 06:17:44.824771 | orchestrator | Monday 02 February 2026 06:16:57 +0000 (0:00:01.425) 0:43:25.164 ******* 2026-02-02 06:17:44.824782 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:17:44.824794 | orchestrator | 2026-02-02 06:17:44.824805 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-02 06:17:44.824816 | orchestrator | Monday 02 February 2026 06:16:58 +0000 (0:00:00.896) 0:43:26.060 ******* 2026-02-02 06:17:44.824827 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-02 06:17:44.824838 | orchestrator | 2026-02-02 06:17:44.824849 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-02 06:17:44.824860 | orchestrator | Monday 02 February 2026 06:16:59 +0000 (0:00:01.049) 0:43:27.110 ******* 2026-02-02 06:17:44.824871 | orchestrator | changed: [testbed-node-4] 2026-02-02 06:17:44.824882 | orchestrator | 
2026-02-02 06:17:44.824893 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-02 06:17:44.824904 | orchestrator | Monday 02 February 2026 06:17:00 +0000 (0:00:01.403) 0:43:28.514 ******* 2026-02-02 06:17:44.824915 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:17:44.824926 | orchestrator | 2026-02-02 06:17:44.824937 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-02 06:17:44.824948 | orchestrator | Monday 02 February 2026 06:17:01 +0000 (0:00:00.857) 0:43:29.371 ******* 2026-02-02 06:17:44.824959 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 06:17:44.824971 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 06:17:44.824981 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 06:17:44.824992 | orchestrator | 2026-02-02 06:17:44.825003 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-02 06:17:44.825016 | orchestrator | Monday 02 February 2026 06:17:03 +0000 (0:00:01.367) 0:43:30.738 ******* 2026-02-02 06:17:44.825029 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-4 2026-02-02 06:17:44.825041 | orchestrator | 2026-02-02 06:17:44.825054 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-02 06:17:44.825066 | orchestrator | Monday 02 February 2026 06:17:04 +0000 (0:00:01.169) 0:43:31.908 ******* 2026-02-02 06:17:44.825078 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:17:44.825091 | orchestrator | 2026-02-02 06:17:44.825104 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-02 06:17:44.825117 | orchestrator | Monday 02 February 2026 06:17:05 +0000 (0:00:01.131) 
0:43:33.039 ******* 2026-02-02 06:17:44.825154 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:17:44.825199 | orchestrator | 2026-02-02 06:17:44.825211 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-02 06:17:44.825238 | orchestrator | Monday 02 February 2026 06:17:06 +0000 (0:00:01.160) 0:43:34.200 ******* 2026-02-02 06:17:44.825252 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:17:44.825266 | orchestrator | 2026-02-02 06:17:44.825279 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-02 06:17:44.825292 | orchestrator | Monday 02 February 2026 06:17:08 +0000 (0:00:01.458) 0:43:35.658 ******* 2026-02-02 06:17:44.825305 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:17:44.825318 | orchestrator | 2026-02-02 06:17:44.825331 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-02 06:17:44.825343 | orchestrator | Monday 02 February 2026 06:17:09 +0000 (0:00:01.153) 0:43:36.812 ******* 2026-02-02 06:17:44.825354 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-02 06:17:44.825366 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-02 06:17:44.825376 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-02 06:17:44.825387 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-02 06:17:44.825398 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-02 06:17:44.825408 | orchestrator | 2026-02-02 06:17:44.825419 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-02-02 06:17:44.825430 | orchestrator | Monday 02 February 2026 06:17:12 +0000 (0:00:03.577) 0:43:40.389 ******* 2026-02-02 
06:17:44.825441 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:17:44.825451 | orchestrator | 2026-02-02 06:17:44.825462 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-02 06:17:44.825473 | orchestrator | Monday 02 February 2026 06:17:13 +0000 (0:00:00.772) 0:43:41.162 ******* 2026-02-02 06:17:44.825501 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-4 2026-02-02 06:17:44.825513 | orchestrator | 2026-02-02 06:17:44.825524 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-02 06:17:44.825535 | orchestrator | Monday 02 February 2026 06:17:14 +0000 (0:00:01.221) 0:43:42.383 ******* 2026-02-02 06:17:44.825546 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-02 06:17:44.825556 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-02-02 06:17:44.825567 | orchestrator | 2026-02-02 06:17:44.825578 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-02 06:17:44.825589 | orchestrator | Monday 02 February 2026 06:17:16 +0000 (0:00:01.790) 0:43:44.173 ******* 2026-02-02 06:17:44.825600 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 06:17:44.825611 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-02 06:17:44.825622 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-02 06:17:44.825632 | orchestrator | 2026-02-02 06:17:44.825643 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-02 06:17:44.825654 | orchestrator | Monday 02 February 2026 06:17:19 +0000 (0:00:03.241) 0:43:47.415 ******* 2026-02-02 06:17:44.825665 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-02 06:17:44.825676 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-02 
06:17:44.825687 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:17:44.825698 | orchestrator | 2026-02-02 06:17:44.825709 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-02 06:17:44.825719 | orchestrator | Monday 02 February 2026 06:17:21 +0000 (0:00:01.642) 0:43:49.058 ******* 2026-02-02 06:17:44.825730 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:17:44.825741 | orchestrator | 2026-02-02 06:17:44.825752 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-02 06:17:44.825772 | orchestrator | Monday 02 February 2026 06:17:22 +0000 (0:00:00.854) 0:43:49.913 ******* 2026-02-02 06:17:44.825782 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:17:44.825793 | orchestrator | 2026-02-02 06:17:44.825804 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-02 06:17:44.825815 | orchestrator | Monday 02 February 2026 06:17:23 +0000 (0:00:00.770) 0:43:50.684 ******* 2026-02-02 06:17:44.825826 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:17:44.825837 | orchestrator | 2026-02-02 06:17:44.825848 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-02 06:17:44.825859 | orchestrator | Monday 02 February 2026 06:17:23 +0000 (0:00:00.849) 0:43:51.533 ******* 2026-02-02 06:17:44.825870 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-4 2026-02-02 06:17:44.825881 | orchestrator | 2026-02-02 06:17:44.825891 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-02 06:17:44.825902 | orchestrator | Monday 02 February 2026 06:17:25 +0000 (0:00:01.082) 0:43:52.616 ******* 2026-02-02 06:17:44.825913 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:17:44.825924 | orchestrator | 2026-02-02 06:17:44.825935 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-02-02 06:17:44.825945 | orchestrator | Monday 02 February 2026 06:17:26 +0000 (0:00:01.480) 0:43:54.097 ******* 2026-02-02 06:17:44.825956 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:17:44.825967 | orchestrator | 2026-02-02 06:17:44.825978 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-02 06:17:44.825993 | orchestrator | Monday 02 February 2026 06:17:29 +0000 (0:00:03.461) 0:43:57.559 ******* 2026-02-02 06:17:44.826011 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-4 2026-02-02 06:17:44.826103 | orchestrator | 2026-02-02 06:17:44.826121 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-02-02 06:17:44.826153 | orchestrator | Monday 02 February 2026 06:17:31 +0000 (0:00:01.362) 0:43:58.921 ******* 2026-02-02 06:17:44.826204 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:17:44.826224 | orchestrator | 2026-02-02 06:17:44.826243 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-02 06:17:44.826271 | orchestrator | Monday 02 February 2026 06:17:33 +0000 (0:00:02.197) 0:44:01.118 ******* 2026-02-02 06:17:44.826291 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:17:44.826311 | orchestrator | 2026-02-02 06:17:44.826330 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-02 06:17:44.826349 | orchestrator | Monday 02 February 2026 06:17:35 +0000 (0:00:01.888) 0:44:03.007 ******* 2026-02-02 06:17:44.826368 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:17:44.826380 | orchestrator | 2026-02-02 06:17:44.826391 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-02 06:17:44.826402 | orchestrator | Monday 02 February 2026 06:17:37 +0000 (0:00:02.292) 0:44:05.299 ******* 2026-02-02 
06:17:44.826412 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:17:44.826423 | orchestrator | 2026-02-02 06:17:44.826434 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-02-02 06:17:44.826445 | orchestrator | Monday 02 February 2026 06:17:38 +0000 (0:00:01.125) 0:44:06.425 ******* 2026-02-02 06:17:44.826455 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:17:44.826466 | orchestrator | 2026-02-02 06:17:44.826477 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-02 06:17:44.826488 | orchestrator | Monday 02 February 2026 06:17:39 +0000 (0:00:01.134) 0:44:07.560 ******* 2026-02-02 06:17:44.826498 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-02-02 06:17:44.826509 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-02-02 06:17:44.826520 | orchestrator | 2026-02-02 06:17:44.826530 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-02 06:17:44.826541 | orchestrator | Monday 02 February 2026 06:17:41 +0000 (0:00:01.885) 0:44:09.445 ******* 2026-02-02 06:17:44.826551 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-02-02 06:17:44.826572 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-02-02 06:17:44.826583 | orchestrator | 2026-02-02 06:17:44.826593 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-02-02 06:17:44.826614 | orchestrator | Monday 02 February 2026 06:17:44 +0000 (0:00:02.949) 0:44:12.395 ******* 2026-02-02 06:18:36.052533 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-02-02 06:18:36.052671 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-02-02 06:18:36.052699 | orchestrator | 2026-02-02 06:18:36.052720 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-02 06:18:36.052740 | orchestrator | Monday 02 February 2026 06:17:49 +0000 (0:00:04.257) 
0:44:16.652 ******* 2026-02-02 06:18:36.052758 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:18:36.052777 | orchestrator | 2026-02-02 06:18:36.052797 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-02 06:18:36.052812 | orchestrator | Monday 02 February 2026 06:17:49 +0000 (0:00:00.915) 0:44:17.568 ******* 2026-02-02 06:18:36.052823 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:18:36.052840 | orchestrator | 2026-02-02 06:18:36.052857 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-02 06:18:36.052874 | orchestrator | Monday 02 February 2026 06:17:50 +0000 (0:00:00.862) 0:44:18.431 ******* 2026-02-02 06:18:36.052890 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:18:36.052908 | orchestrator | 2026-02-02 06:18:36.052924 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-02-02 06:18:36.052941 | orchestrator | Monday 02 February 2026 06:17:51 +0000 (0:00:01.065) 0:44:19.497 ******* 2026-02-02 06:18:36.052958 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:18:36.052975 | orchestrator | 2026-02-02 06:18:36.052992 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-02-02 06:18:36.053011 | orchestrator | Monday 02 February 2026 06:17:52 +0000 (0:00:00.755) 0:44:20.252 ******* 2026-02-02 06:18:36.053028 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:18:36.053045 | orchestrator | 2026-02-02 06:18:36.053061 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-02-02 06:18:36.053078 | orchestrator | Monday 02 February 2026 06:17:53 +0000 (0:00:00.775) 0:44:21.027 ******* 2026-02-02 06:18:36.053091 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-02-02 06:18:36.053104 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-02-02 06:18:36.053116 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-02-02 06:18:36.053129 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (597 retries left). 2026-02-02 06:18:36.053147 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-02 06:18:36.053189 | orchestrator | 2026-02-02 06:18:36.053206 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-02 06:18:36.053224 | orchestrator | Monday 02 February 2026 06:18:07 +0000 (0:00:13.916) 0:44:34.944 ******* 2026-02-02 06:18:36.053241 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:18:36.053258 | orchestrator | 2026-02-02 06:18:36.053276 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-02 06:18:36.053291 | orchestrator | Monday 02 February 2026 06:18:08 +0000 (0:00:00.791) 0:44:35.736 ******* 2026-02-02 06:18:36.053308 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:18:36.053324 | orchestrator | 2026-02-02 06:18:36.053342 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-02 06:18:36.053359 | orchestrator | Monday 02 February 2026 06:18:08 +0000 (0:00:00.806) 0:44:36.543 ******* 2026-02-02 06:18:36.053376 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:18:36.053393 | orchestrator | 2026-02-02 06:18:36.053411 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-02 06:18:36.053458 | orchestrator | Monday 02 February 2026 06:18:09 +0000 (0:00:00.767) 0:44:37.311 ******* 2026-02-02 06:18:36.053474 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:18:36.053484 | orchestrator 
| 2026-02-02 06:18:36.053493 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-02 06:18:36.053517 | orchestrator | Monday 02 February 2026 06:18:10 +0000 (0:00:00.789) 0:44:38.100 ******* 2026-02-02 06:18:36.053527 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:18:36.053537 | orchestrator | 2026-02-02 06:18:36.053546 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-02-02 06:18:36.053556 | orchestrator | Monday 02 February 2026 06:18:11 +0000 (0:00:00.764) 0:44:38.865 ******* 2026-02-02 06:18:36.053568 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:18:36.053584 | orchestrator | 2026-02-02 06:18:36.053600 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-02 06:18:36.053617 | orchestrator | Monday 02 February 2026 06:18:12 +0000 (0:00:00.795) 0:44:39.661 ******* 2026-02-02 06:18:36.053632 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:18:36.053674 | orchestrator | 2026-02-02 06:18:36.053720 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-02-02 06:18:36.053730 | orchestrator | 2026-02-02 06:18:36.053740 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-02 06:18:36.053749 | orchestrator | Monday 02 February 2026 06:18:13 +0000 (0:00:01.028) 0:44:40.689 ******* 2026-02-02 06:18:36.053758 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-02-02 06:18:36.053768 | orchestrator | 2026-02-02 06:18:36.053777 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-02 06:18:36.053787 | orchestrator | Monday 02 February 2026 06:18:14 +0000 (0:00:01.349) 0:44:42.039 ******* 2026-02-02 06:18:36.053797 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:18:36.053806 | orchestrator | 
2026-02-02 06:18:36.053817 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-02 06:18:36.053826 | orchestrator | Monday 02 February 2026 06:18:15 +0000 (0:00:01.486) 0:44:43.525 ******* 2026-02-02 06:18:36.053836 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:18:36.053845 | orchestrator | 2026-02-02 06:18:36.053855 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-02 06:18:36.053865 | orchestrator | Monday 02 February 2026 06:18:17 +0000 (0:00:01.184) 0:44:44.709 ******* 2026-02-02 06:18:36.053894 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:18:36.053905 | orchestrator | 2026-02-02 06:18:36.053914 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-02 06:18:36.053924 | orchestrator | Monday 02 February 2026 06:18:18 +0000 (0:00:01.438) 0:44:46.147 ******* 2026-02-02 06:18:36.053933 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:18:36.053943 | orchestrator | 2026-02-02 06:18:36.053952 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-02 06:18:36.053962 | orchestrator | Monday 02 February 2026 06:18:19 +0000 (0:00:01.142) 0:44:47.290 ******* 2026-02-02 06:18:36.053971 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:18:36.053981 | orchestrator | 2026-02-02 06:18:36.053990 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-02 06:18:36.054000 | orchestrator | Monday 02 February 2026 06:18:20 +0000 (0:00:01.189) 0:44:48.480 ******* 2026-02-02 06:18:36.054009 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:18:36.054085 | orchestrator | 2026-02-02 06:18:36.054095 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-02 06:18:36.054105 | orchestrator | Monday 02 February 2026 06:18:22 +0000 (0:00:01.158) 0:44:49.638 
******* 2026-02-02 06:18:36.054115 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:18:36.054124 | orchestrator | 2026-02-02 06:18:36.054134 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-02 06:18:36.054143 | orchestrator | Monday 02 February 2026 06:18:23 +0000 (0:00:01.211) 0:44:50.850 ******* 2026-02-02 06:18:36.054153 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:18:36.054222 | orchestrator | 2026-02-02 06:18:36.054233 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-02 06:18:36.054244 | orchestrator | Monday 02 February 2026 06:18:24 +0000 (0:00:01.150) 0:44:52.001 ******* 2026-02-02 06:18:36.054254 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 06:18:36.054263 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 06:18:36.054273 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 06:18:36.054282 | orchestrator | 2026-02-02 06:18:36.054292 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-02 06:18:36.054302 | orchestrator | Monday 02 February 2026 06:18:26 +0000 (0:00:02.066) 0:44:54.067 ******* 2026-02-02 06:18:36.054311 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:18:36.054321 | orchestrator | 2026-02-02 06:18:36.054331 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-02 06:18:36.054341 | orchestrator | Monday 02 February 2026 06:18:27 +0000 (0:00:01.228) 0:44:55.296 ******* 2026-02-02 06:18:36.054350 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 06:18:36.054360 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 06:18:36.054369 | 
orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 06:18:36.054379 | orchestrator | 2026-02-02 06:18:36.054389 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-02 06:18:36.054398 | orchestrator | Monday 02 February 2026 06:18:31 +0000 (0:00:03.616) 0:44:58.913 ******* 2026-02-02 06:18:36.054408 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-02 06:18:36.054418 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-02 06:18:36.054504 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-02 06:18:36.054515 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:18:36.054525 | orchestrator | 2026-02-02 06:18:36.054535 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-02 06:18:36.054545 | orchestrator | Monday 02 February 2026 06:18:33 +0000 (0:00:01.849) 0:45:00.762 ******* 2026-02-02 06:18:36.054565 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-02 06:18:36.054578 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-02 06:18:36.054588 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-02 06:18:36.054598 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:18:36.054608 | orchestrator | 2026-02-02 
06:18:36.054618 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-02 06:18:36.054628 | orchestrator | Monday 02 February 2026 06:18:34 +0000 (0:00:01.703) 0:45:02.466 ******* 2026-02-02 06:18:36.054640 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 06:18:36.054665 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 06:18:54.660603 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 06:18:54.660726 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:18:54.660751 | orchestrator | 2026-02-02 06:18:54.660769 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-02 06:18:54.660786 | orchestrator | Monday 02 February 2026 06:18:36 +0000 (0:00:01.151) 0:45:03.618 ******* 2026-02-02 06:18:54.660797 | orchestrator | 
ok: [testbed-node-5] => (item={'changed': False, 'stdout': '01c921aa07f8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-02 06:18:28.824688', 'end': '2026-02-02 06:18:28.891094', 'delta': '0:00:00.066406', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['01c921aa07f8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-02 06:18:54.660809 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'c530967d0aad', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-02 06:18:29.445482', 'end': '2026-02-02 06:18:29.499142', 'delta': '0:00:00.053660', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c530967d0aad'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-02 06:18:54.660833 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'a68c96a70534', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-02 06:18:30.041797', 'end': '2026-02-02 06:18:30.093438', 'delta': '0:00:00.051641', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a68c96a70534'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-02 06:18:54.660843 | orchestrator | 2026-02-02 06:18:54.660852 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-02 06:18:54.660886 | orchestrator | Monday 02 February 2026 06:18:37 +0000 (0:00:01.203) 0:45:04.821 ******* 2026-02-02 06:18:54.660903 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:18:54.660921 | orchestrator | 2026-02-02 06:18:54.660938 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-02 06:18:54.660956 | orchestrator | Monday 02 February 2026 06:18:38 +0000 (0:00:01.215) 0:45:06.037 ******* 2026-02-02 06:18:54.660973 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:18:54.661028 | orchestrator | 2026-02-02 06:18:54.661046 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-02 06:18:54.661063 | orchestrator | Monday 02 February 2026 06:18:39 +0000 (0:00:01.232) 0:45:07.270 ******* 2026-02-02 06:18:54.661081 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:18:54.661098 | orchestrator | 2026-02-02 06:18:54.661115 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-02 06:18:54.661133 | orchestrator | Monday 02 February 2026 06:18:40 +0000 (0:00:01.192) 0:45:08.463 ******* 2026-02-02 06:18:54.661151 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-02 06:18:54.661202 | orchestrator | 2026-02-02 06:18:54.661219 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 06:18:54.661235 | orchestrator | 
Monday 02 February 2026 06:18:42 +0000 (0:00:01.938) 0:45:10.402 ******* 2026-02-02 06:18:54.661252 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:18:54.661268 | orchestrator | 2026-02-02 06:18:54.661285 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-02 06:18:54.661301 | orchestrator | Monday 02 February 2026 06:18:43 +0000 (0:00:01.157) 0:45:11.560 ******* 2026-02-02 06:18:54.661338 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:18:54.661356 | orchestrator | 2026-02-02 06:18:54.661372 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-02 06:18:54.661389 | orchestrator | Monday 02 February 2026 06:18:45 +0000 (0:00:01.112) 0:45:12.672 ******* 2026-02-02 06:18:54.661405 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:18:54.661421 | orchestrator | 2026-02-02 06:18:54.661437 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 06:18:54.661454 | orchestrator | Monday 02 February 2026 06:18:46 +0000 (0:00:01.231) 0:45:13.903 ******* 2026-02-02 06:18:54.661470 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:18:54.661487 | orchestrator | 2026-02-02 06:18:54.661503 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-02 06:18:54.661519 | orchestrator | Monday 02 February 2026 06:18:47 +0000 (0:00:01.138) 0:45:15.042 ******* 2026-02-02 06:18:54.661535 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:18:54.661551 | orchestrator | 2026-02-02 06:18:54.661567 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-02 06:18:54.661583 | orchestrator | Monday 02 February 2026 06:18:48 +0000 (0:00:01.090) 0:45:16.133 ******* 2026-02-02 06:18:54.661599 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:18:54.661614 | orchestrator | 2026-02-02 06:18:54.661630 | 
orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-02 06:18:54.661646 | orchestrator | Monday 02 February 2026 06:18:49 +0000 (0:00:01.241) 0:45:17.375 ******* 2026-02-02 06:18:54.661662 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:18:54.661678 | orchestrator | 2026-02-02 06:18:54.661694 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-02 06:18:54.661710 | orchestrator | Monday 02 February 2026 06:18:50 +0000 (0:00:01.129) 0:45:18.504 ******* 2026-02-02 06:18:54.661726 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:18:54.661742 | orchestrator | 2026-02-02 06:18:54.661757 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-02 06:18:54.661773 | orchestrator | Monday 02 February 2026 06:18:52 +0000 (0:00:01.238) 0:45:19.743 ******* 2026-02-02 06:18:54.661789 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:18:54.661805 | orchestrator | 2026-02-02 06:18:54.661821 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-02 06:18:54.661839 | orchestrator | Monday 02 February 2026 06:18:53 +0000 (0:00:01.105) 0:45:20.848 ******* 2026-02-02 06:18:54.661855 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:18:54.661871 | orchestrator | 2026-02-02 06:18:54.661887 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-02 06:18:54.661903 | orchestrator | Monday 02 February 2026 06:18:54 +0000 (0:00:01.138) 0:45:21.987 ******* 2026-02-02 06:18:54.661920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 
'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:18:54.661955 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e4fc6918--1796--5a48--9994--5f31e91196e6-osd--block--e4fc6918--1796--5a48--9994--5f31e91196e6', 'dm-uuid-LVM-o4NjfQidgd0d8Dt2ERSF2CVjMcc1iNdF2FL70XUBfeOz8qjNKOcDK13w6fcJ9Hta'], 'uuids': ['7d002011-c2d2-4478-8516-4cfbbdeaec0b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '10248bd5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2FL70X-UBfe-Oz8q-jNKO-cDK1-3w6f-cJ9Hta']}})  2026-02-02 06:18:54.661975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e969e129-18ea-460f-85bc-8dfb49c82359', 'scsi-SQEMU_QEMU_HARDDISK_e969e129-18ea-460f-85bc-8dfb49c82359'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e969e129', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-02 06:18:54.662003 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qjdzC2-uhmD-TpwQ-o3eu-AERk-xIpn-IuLEqz', 'scsi-0QEMU_QEMU_HARDDISK_bc39994b-92aa-40f2-807e-6457f6f8ea40', 'scsi-SQEMU_QEMU_HARDDISK_bc39994b-92aa-40f2-807e-6457f6f8ea40'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bc39994b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--d54a22ee--8606--5662--853b--b39e232caa8f-osd--block--d54a22ee--8606--5662--853b--b39e232caa8f']}})  2026-02-02 06:18:55.787001 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:18:55.787229 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:18:55.787259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-24-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-02 06:18:55.787329 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:18:55.787371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-WJAtRW-tNBy-gZ9y-UEjc-aaQo-SOl1-TBvsQs', 'dm-uuid-CRYPT-LUKS2-756889fb99344894803ed86e669bebbd-WJAtRW-tNBy-gZ9y-UEjc-aaQo-SOl1-TBvsQs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-02 06:18:55.787393 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:18:55.787413 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d54a22ee--8606--5662--853b--b39e232caa8f-osd--block--d54a22ee--8606--5662--853b--b39e232caa8f', 'dm-uuid-LVM-oyVS0lpzZeiZxxmfRvad67kbexmRBG5IWJAtRWtNBygZ9yUEjcaaQoSOl1TBvsQs'], 'uuids': ['756889fb-9934-4894-803e-d86e669bebbd'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'bc39994b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['WJAtRW-tNBy-gZ9y-UEjc-aaQo-SOl1-TBvsQs']}})  2026-02-02 06:18:55.787452 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-etyEN7-O4pu-QliJ-NKxv-0HLx-jIcx-JGZ0d7', 'scsi-0QEMU_QEMU_HARDDISK_10248bd5-0286-487e-81b0-791c797cb21b', 'scsi-SQEMU_QEMU_HARDDISK_10248bd5-0286-487e-81b0-791c797cb21b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '10248bd5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--e4fc6918--1796--5a48--9994--5f31e91196e6-osd--block--e4fc6918--1796--5a48--9994--5f31e91196e6']}})  2026-02-02 06:18:55.787465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:18:55.787486 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2a7e3dd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part16', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part14', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part15', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part1', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-02 06:18:55.787514 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:18:55.787527 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:18:55.787548 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2FL70X-UBfe-Oz8q-jNKO-cDK1-3w6f-cJ9Hta', 'dm-uuid-CRYPT-LUKS2-7d002011c2d2447885164cfbbdeaec0b-2FL70X-UBfe-Oz8q-jNKO-cDK1-3w6f-cJ9Hta'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-02 06:18:56.000728 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:18:56.000824 | orchestrator | 2026-02-02 06:18:56.000839 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-02 06:18:56.000852 | orchestrator | Monday 02 February 2026 06:18:55 +0000 (0:00:01.372) 0:45:23.360 ******* 2026-02-02 06:18:56.000866 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:18:56.000904 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e4fc6918--1796--5a48--9994--5f31e91196e6-osd--block--e4fc6918--1796--5a48--9994--5f31e91196e6', 'dm-uuid-LVM-o4NjfQidgd0d8Dt2ERSF2CVjMcc1iNdF2FL70XUBfeOz8qjNKOcDK13w6fcJ9Hta'], 'uuids': ['7d002011-c2d2-4478-8516-4cfbbdeaec0b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '10248bd5', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2FL70X-UBfe-Oz8q-jNKO-cDK1-3w6f-cJ9Hta']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:18:56.000933 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e969e129-18ea-460f-85bc-8dfb49c82359', 'scsi-SQEMU_QEMU_HARDDISK_e969e129-18ea-460f-85bc-8dfb49c82359'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e969e129', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:18:56.000946 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qjdzC2-uhmD-TpwQ-o3eu-AERk-xIpn-IuLEqz', 'scsi-0QEMU_QEMU_HARDDISK_bc39994b-92aa-40f2-807e-6457f6f8ea40', 'scsi-SQEMU_QEMU_HARDDISK_bc39994b-92aa-40f2-807e-6457f6f8ea40'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bc39994b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--d54a22ee--8606--5662--853b--b39e232caa8f-osd--block--d54a22ee--8606--5662--853b--b39e232caa8f']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:18:56.000979 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:18:56.000992 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:18:56.001011 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-24-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:18:56.001028 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:18:56.001040 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-WJAtRW-tNBy-gZ9y-UEjc-aaQo-SOl1-TBvsQs', 'dm-uuid-CRYPT-LUKS2-756889fb99344894803ed86e669bebbd-WJAtRW-tNBy-gZ9y-UEjc-aaQo-SOl1-TBvsQs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:18:56.001052 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:18:56.001070 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d54a22ee--8606--5662--853b--b39e232caa8f-osd--block--d54a22ee--8606--5662--853b--b39e232caa8f', 'dm-uuid-LVM-oyVS0lpzZeiZxxmfRvad67kbexmRBG5IWJAtRWtNBygZ9yUEjcaaQoSOl1TBvsQs'], 'uuids': ['756889fb-9934-4894-803e-d86e669bebbd'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'bc39994b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['WJAtRW-tNBy-gZ9y-UEjc-aaQo-SOl1-TBvsQs']}}, 'ansible_loop_var': 'item'})
2026-02-02 06:19:09.263958 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-etyEN7-O4pu-QliJ-NKxv-0HLx-jIcx-JGZ0d7', 'scsi-0QEMU_QEMU_HARDDISK_10248bd5-0286-487e-81b0-791c797cb21b', 'scsi-SQEMU_QEMU_HARDDISK_10248bd5-0286-487e-81b0-791c797cb21b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '10248bd5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--e4fc6918--1796--5a48--9994--5f31e91196e6-osd--block--e4fc6918--1796--5a48--9994--5f31e91196e6']}}, 'ansible_loop_var': 'item'})
2026-02-02 06:19:09.264142 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:19:09.264234 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2a7e3dd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part16', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part14', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part15', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part1', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:19:09.264289 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:19:09.264324 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:19:09.264345 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2FL70X-UBfe-Oz8q-jNKO-cDK1-3w6f-cJ9Hta', 'dm-uuid-CRYPT-LUKS2-7d002011c2d2447885164cfbbdeaec0b-2FL70X-UBfe-Oz8q-jNKO-cDK1-3w6f-cJ9Hta'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:19:09.264375 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:19:09.264398 | orchestrator |
2026-02-02 06:19:09.264420 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-02 06:19:09.264439 | orchestrator | Monday 02 February 2026 06:18:57 +0000 (0:00:01.407) 0:45:24.768 *******
2026-02-02 06:19:09.264458 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:19:09.264478 | orchestrator |
2026-02-02 06:19:09.264496 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-02 06:19:09.264516 | orchestrator | Monday 02 February 2026 06:18:58 +0000 (0:00:01.530) 0:45:26.298 *******
2026-02-02 06:19:09.264534 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:19:09.264552 | orchestrator |
2026-02-02 06:19:09.264570 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-02 06:19:09.264588 | orchestrator | Monday 02 February 2026 06:18:59 +0000 (0:00:01.117) 0:45:27.416 *******
2026-02-02 06:19:09.264606 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:19:09.264625 | orchestrator |
2026-02-02 06:19:09.264643 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-02 06:19:09.264663 | orchestrator | Monday 02 February 2026 06:19:01 +0000 (0:00:01.477) 0:45:28.893 *******
2026-02-02 06:19:09.264682 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:19:09.264700 | orchestrator |
2026-02-02 06:19:09.264718 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-02 06:19:09.264735 | orchestrator | Monday 02 February 2026 06:19:02 +0000 (0:00:01.104) 0:45:29.997 *******
2026-02-02 06:19:09.264753 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:19:09.264771 | orchestrator |
2026-02-02 06:19:09.264790 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-02 06:19:09.264810 | orchestrator | Monday 02 February 2026 06:19:03 +0000 (0:00:01.229) 0:45:31.227 *******
2026-02-02 06:19:09.264829 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:19:09.264848 | orchestrator |
2026-02-02 06:19:09.264865 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-02 06:19:09.264884 | orchestrator | Monday 02 February 2026 06:19:04 +0000 (0:00:01.146) 0:45:32.374 *******
2026-02-02 06:19:09.264903 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-02 06:19:09.264923 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-02 06:19:09.264953 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-02 06:19:09.264972 | orchestrator |
2026-02-02 06:19:09.264990 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-02 06:19:09.265008 | orchestrator | Monday 02 February 2026 06:19:06 +0000 (0:00:02.150) 0:45:34.524 *******
2026-02-02 06:19:09.265027 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-02 06:19:09.265045 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-02 06:19:09.265063 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-02 06:19:09.265080 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:19:09.265098 | orchestrator |
2026-02-02 06:19:09.265117 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-02 06:19:09.265135 | orchestrator | Monday 02 February 2026 06:19:08 +0000 (0:00:01.141) 0:45:35.667 *******
2026-02-02 06:19:09.265216 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5
2026-02-02 06:19:09.265243 | orchestrator |
2026-02-02 06:19:09.265280 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-02 06:19:53.016879 | orchestrator | Monday 02 February 2026 06:19:09 +0000 (0:00:01.161) 0:45:36.828 *******
2026-02-02 06:19:53.016995 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:19:53.017019 | orchestrator |
2026-02-02 06:19:53.017041 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-02 06:19:53.017053 | orchestrator | Monday 02 February 2026 06:19:10 +0000 (0:00:01.146) 0:45:37.975 *******
2026-02-02 06:19:53.017070 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:19:53.017088 | orchestrator |
2026-02-02 06:19:53.017108 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-02 06:19:53.017127 | orchestrator | Monday 02 February 2026 06:19:11 +0000 (0:00:01.115) 0:45:39.091 *******
2026-02-02 06:19:53.017146 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:19:53.017226 | orchestrator |
2026-02-02 06:19:53.017239 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-02 06:19:53.017250 | orchestrator | Monday 02 February 2026 06:19:12 +0000 (0:00:01.130) 0:45:40.221 *******
2026-02-02 06:19:53.017261 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:19:53.017273 | orchestrator |
2026-02-02 06:19:53.017285 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-02 06:19:53.017296 | orchestrator | Monday 02 February 2026 06:19:13 +0000 (0:00:01.216) 0:45:41.438 *******
2026-02-02 06:19:53.017307 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-02 06:19:53.017318 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-02 06:19:53.017329 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-02 06:19:53.017340 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:19:53.017351 | orchestrator |
2026-02-02 06:19:53.017363 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-02 06:19:53.017373 | orchestrator | Monday 02 February 2026 06:19:15 +0000 (0:00:01.401) 0:45:42.839 *******
2026-02-02 06:19:53.017384 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-02 06:19:53.017399 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-02 06:19:53.017418 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-02 06:19:53.017439 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:19:53.017453 | orchestrator |
2026-02-02 06:19:53.017466 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-02 06:19:53.017482 | orchestrator | Monday 02 February 2026 06:19:16 +0000 (0:00:01.473) 0:45:44.313 *******
2026-02-02 06:19:53.017500 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-02 06:19:53.017538 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-02 06:19:53.017558 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-02 06:19:53.017630 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:19:53.017653 | orchestrator |
2026-02-02 06:19:53.017673 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-02 06:19:53.017692 | orchestrator | Monday 02 February 2026 06:19:18 +0000 (0:00:01.441) 0:45:45.755 *******
2026-02-02 06:19:53.017708 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:19:53.017721 | orchestrator |
2026-02-02 06:19:53.017734 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-02 06:19:53.017748 | orchestrator | Monday 02 February 2026 06:19:19 +0000 (0:00:01.177) 0:45:46.932 *******
2026-02-02 06:19:53.017761 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-02 06:19:53.017774 | orchestrator |
2026-02-02 06:19:53.017786 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-02 06:19:53.017796 | orchestrator | Monday 02 February 2026 06:19:21 +0000 (0:00:01.666) 0:45:48.599 *******
2026-02-02 06:19:53.017807 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-02 06:19:53.017818 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 06:19:53.017829 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 06:19:53.017839 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-02 06:19:53.017850 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-02 06:19:53.017865 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-02 06:19:53.017884 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-02 06:19:53.017903 | orchestrator |
2026-02-02 06:19:53.017921 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-02 06:19:53.017939 | orchestrator | Monday 02 February 2026 06:19:23 +0000 (0:00:02.275) 0:45:50.874 *******
2026-02-02 06:19:53.017958 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-02 06:19:53.017977 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 06:19:53.017995 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 06:19:53.018009 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-02 06:19:53.018083 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-02 06:19:53.018095 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-02 06:19:53.018112 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-02 06:19:53.018131 | orchestrator |
2026-02-02 06:19:53.018206 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-02-02 06:19:53.018228 | orchestrator | Monday 02 February 2026 06:19:25 +0000 (0:00:02.288) 0:45:53.163 *******
2026-02-02 06:19:53.018247 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:19:53.018266 | orchestrator |
2026-02-02 06:19:53.018287 | orchestrator | TASK [Set num_osds] ************************************************************
2026-02-02 06:19:53.018330 | orchestrator | Monday 02 February 2026 06:19:26 +0000 (0:00:01.168) 0:45:54.332 *******
2026-02-02 06:19:53.018354 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:19:53.018373 | orchestrator |
2026-02-02 06:19:53.018388 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-02-02 06:19:53.018399 | orchestrator | Monday 02 February 2026 06:19:27 +0000 (0:00:00.770) 0:45:55.102 *******
2026-02-02 06:19:53.018412 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:19:53.018431 | orchestrator |
2026-02-02 06:19:53.018451 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-02-02 06:19:53.018470 | orchestrator | Monday 02 February 2026 06:19:28 +0000 (0:00:00.883) 0:45:55.985 *******
2026-02-02 06:19:53.018489 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-02-02 06:19:53.018524 | orchestrator | changed: [testbed-node-5] => (item=3)
2026-02-02 06:19:53.018544 | orchestrator |
2026-02-02 06:19:53.018563 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-02 06:19:53.018582 | orchestrator | Monday 02 February 2026 06:19:33 +0000 (0:00:04.847) 0:46:00.833 *******
2026-02-02 06:19:53.018601 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-02-02 06:19:53.018616 | orchestrator |
2026-02-02 06:19:53.018630 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-02 06:19:53.018648 | orchestrator | Monday 02 February 2026 06:19:34 +0000 (0:00:01.139) 0:46:01.972 *******
2026-02-02 06:19:53.018668 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-02-02 06:19:53.018686 | orchestrator |
2026-02-02 06:19:53.018704 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-02 06:19:53.018723 | orchestrator | Monday 02 February 2026 06:19:35 +0000 (0:00:01.164) 0:46:03.137 *******
2026-02-02 06:19:53.018741 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:19:53.018761 | orchestrator |
2026-02-02 06:19:53.018779 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-02 06:19:53.018798 | orchestrator | Monday 02 February 2026 06:19:36 +0000 (0:00:01.118) 0:46:04.255 *******
2026-02-02 06:19:53.018815 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:19:53.018834 | orchestrator |
2026-02-02 06:19:53.018854 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-02 06:19:53.018872 | orchestrator | Monday 02 February 2026 06:19:38 +0000 (0:00:01.490) 0:46:05.746 *******
2026-02-02 06:19:53.018886 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:19:53.018897 | orchestrator |
2026-02-02 06:19:53.018916 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-02 06:19:53.018927 | orchestrator | Monday 02 February 2026 06:19:39 +0000 (0:00:01.810) 0:46:07.556 *******
2026-02-02 06:19:53.018938 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:19:53.018949 | orchestrator |
2026-02-02 06:19:53.018960 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-02 06:19:53.018974 | orchestrator | Monday 02 February 2026 06:19:41 +0000 (0:00:01.533) 0:46:09.090 *******
2026-02-02 06:19:53.018993 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:19:53.019012 | orchestrator |
2026-02-02 06:19:53.019030 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-02 06:19:53.019050 | orchestrator | Monday 02 February 2026 06:19:42 +0000 (0:00:01.171) 0:46:10.261 *******
2026-02-02 06:19:53.019061 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:19:53.019073 | orchestrator |
2026-02-02 06:19:53.019091 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-02 06:19:53.019110 | orchestrator | Monday 02 February 2026 06:19:43 +0000 (0:00:01.112) 0:46:11.374 *******
2026-02-02 06:19:53.019127 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:19:53.019144 | orchestrator |
2026-02-02 06:19:53.019181 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-02 06:19:53.019197 | orchestrator | Monday 02 February 2026 06:19:45 +0000 (0:00:01.231) 0:46:12.605 *******
2026-02-02 06:19:53.019208 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:19:53.019218 | orchestrator |
2026-02-02 06:19:53.019229 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-02 06:19:53.019241 | orchestrator | Monday 02 February 2026 06:19:46 +0000 (0:00:01.523) 0:46:14.129 *******
2026-02-02 06:19:53.019260 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:19:53.019280 | orchestrator |
2026-02-02 06:19:53.019298 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-02 06:19:53.019318 | orchestrator | Monday 02 February 2026 06:19:48 +0000 (0:00:01.543) 0:46:15.673 *******
2026-02-02 06:19:53.019335 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:19:53.019353 | orchestrator |
2026-02-02 06:19:53.019372 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-02 06:19:53.019404 | orchestrator | Monday 02 February 2026 06:19:48 +0000 (0:00:00.784) 0:46:16.457 *******
2026-02-02 06:19:53.019423 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:19:53.019443 | orchestrator |
2026-02-02 06:19:53.019455 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-02 06:19:53.019470 | orchestrator | Monday 02 February 2026 06:19:49 +0000 (0:00:00.893) 0:46:17.351 *******
2026-02-02 06:19:53.019487 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:19:53.019507 | orchestrator |
2026-02-02 06:19:53.019525 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-02 06:19:53.019544 | orchestrator | Monday 02 February 2026 06:19:50 +0000 (0:00:00.850) 0:46:18.201 *******
2026-02-02 06:19:53.019563 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:19:53.019582 | orchestrator |
2026-02-02 06:19:53.019601 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-02 06:19:53.019620 | orchestrator | Monday 02 February 2026 06:19:51 +0000 (0:00:00.822) 0:46:19.024 *******
2026-02-02 06:19:53.019637 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:19:53.019655 | orchestrator |
2026-02-02 06:19:53.019675 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-02 06:19:53.019694 | orchestrator | Monday 02 February 2026 06:19:52 +0000 (0:00:00.795) 0:46:19.819 *******
2026-02-02 06:19:53.019706 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:19:53.019717 | orchestrator |
2026-02-02 06:19:53.019739 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-02 06:20:34.402709 | orchestrator | Monday 02 February 2026 06:19:53 +0000 (0:00:00.765) 0:46:20.585 *******
2026-02-02 06:20:34.402806 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:20:34.402819 | orchestrator |
2026-02-02 06:20:34.402829 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-02 06:20:34.402836 | orchestrator | Monday 02 February 2026 06:19:53 +0000 (0:00:00.752) 0:46:21.338 *******
2026-02-02 06:20:34.402844 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:20:34.402851 | orchestrator |
2026-02-02 06:20:34.402859 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-02 06:20:34.402866 | orchestrator | Monday 02 February 2026 06:19:54 +0000 (0:00:01.085) 0:46:22.423 *******
2026-02-02 06:20:34.402873 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:20:34.402881 | orchestrator |
2026-02-02 06:20:34.402889 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-02 06:20:34.402896 | orchestrator | Monday 02 February 2026 06:19:55 +0000 (0:00:00.885) 0:46:23.308 *******
2026-02-02 06:20:34.402903 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:20:34.402910 | orchestrator |
2026-02-02 06:20:34.402918 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-02 06:20:34.402925 | orchestrator | Monday 02 February 2026 06:19:56 +0000 (0:00:00.861) 0:46:24.170 *******
2026-02-02 06:20:34.402932 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:20:34.402939 | orchestrator |
2026-02-02 06:20:34.402947 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-02 06:20:34.402955 | orchestrator | Monday 02 February 2026 06:19:57 +0000 (0:00:00.784) 0:46:24.954 *******
2026-02-02 06:20:34.402967 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:20:34.402978 | orchestrator |
2026-02-02 06:20:34.402989 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-02 06:20:34.402999 | orchestrator | Monday 02 February 2026 06:19:58 +0000 (0:00:00.840) 0:46:25.794 *******
2026-02-02 06:20:34.403010 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:20:34.403022 | orchestrator |
2026-02-02 06:20:34.403033 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-02 06:20:34.403043 | orchestrator | Monday 02 February 2026 06:19:59 +0000 (0:00:00.816) 0:46:26.611 *******
2026-02-02 06:20:34.403054 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:20:34.403068 | orchestrator |
2026-02-02 06:20:34.403080 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-02 06:20:34.403115 | orchestrator | Monday 02 February 2026 06:19:59 +0000 (0:00:00.751) 0:46:27.363 *******
2026-02-02 06:20:34.403141 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:20:34.403197 | orchestrator |
2026-02-02 06:20:34.403209 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-02 06:20:34.403220 | orchestrator | Monday 02 February 2026 06:20:00 +0000 (0:00:00.787) 0:46:28.151 *******
2026-02-02 06:20:34.403230 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:20:34.403240 | orchestrator |
2026-02-02 06:20:34.403251 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-02 06:20:34.403262 | orchestrator | Monday 02 February 2026 06:20:01 +0000 (0:00:00.769) 0:46:28.921 *******
2026-02-02 06:20:34.403274 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:20:34.403285 | orchestrator |
2026-02-02 06:20:34.403296 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-02 06:20:34.403309 | orchestrator | Monday 02 February 2026 06:20:02 +0000 (0:00:00.789) 0:46:29.710 *******
2026-02-02 06:20:34.403321 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:20:34.403332 | orchestrator |
2026-02-02 06:20:34.403344 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-02 06:20:34.403357 | orchestrator | Monday 02 February 2026 06:20:02 +0000 (0:00:00.744) 0:46:30.455 *******
2026-02-02 06:20:34.403370 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:20:34.403383 | orchestrator |
2026-02-02 06:20:34.403397 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-02 06:20:34.403410 | orchestrator | Monday 02 February 2026 06:20:03 +0000 (0:00:00.755) 0:46:31.210 *******
2026-02-02 06:20:34.403422 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:20:34.403435 | orchestrator |
2026-02-02 06:20:34.403449 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-02 06:20:34.403463 | orchestrator | Monday 02 February 2026 06:20:04 +0000 (0:00:00.911) 0:46:32.121 *******
2026-02-02 06:20:34.403476 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:20:34.403490 | orchestrator |
2026-02-02 06:20:34.403503 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-02 06:20:34.403518 | orchestrator | Monday 02 February 2026 06:20:05 +0000 (0:00:00.770) 0:46:32.892 *******
2026-02-02 06:20:34.403532 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:20:34.403546 | orchestrator |
2026-02-02 06:20:34.403559 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-02 06:20:34.403572 | orchestrator | Monday 02 February 2026 06:20:06 +0000 (0:00:00.778) 0:46:33.671 *******
2026-02-02 06:20:34.403585 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:20:34.403600 | orchestrator |
2026-02-02 06:20:34.403614 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-02 06:20:34.403647 | orchestrator | Monday 02 February 2026 06:20:07 +0000 (0:00:01.562) 0:46:35.233 *******
2026-02-02 06:20:34.403660 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:20:34.403672 | orchestrator |
2026-02-02 06:20:34.403684 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-02 06:20:34.403691 | orchestrator | Monday 02 February 2026 06:20:09 +0000 (0:00:01.877) 0:46:37.111 *******
2026-02-02 06:20:34.403698 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-02-02 06:20:34.403707 | orchestrator |
2026-02-02 06:20:34.403714 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-02 06:20:34.403722 | orchestrator | Monday 02 February 2026 06:20:10 +0000 (0:00:01.168) 0:46:38.279 *******
2026-02-02 06:20:34.403729 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:20:34.403736 | orchestrator |
2026-02-02 06:20:34.403743 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-02 06:20:34.403767 | orchestrator | Monday 02 February 2026 06:20:11 +0000 (0:00:01.201) 0:46:39.480 *******
2026-02-02 06:20:34.403775 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:20:34.403783 | orchestrator |
2026-02-02 06:20:34.403801 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-02 06:20:34.403809 | orchestrator | Monday 02 February 2026 06:20:13 +0000 (0:00:01.125) 0:46:40.606 *******
2026-02-02 06:20:34.403816 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-02 06:20:34.403823 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-02 06:20:34.403831 | orchestrator |
2026-02-02 06:20:34.403838 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-02 06:20:34.403845 | orchestrator | Monday 02 February 2026 06:20:14 +0000 (0:00:01.849) 0:46:42.456 *******
2026-02-02 06:20:34.403852 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:20:34.403859 | orchestrator |
2026-02-02 06:20:34.403867 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-02 06:20:34.403874 | orchestrator | Monday 02 February 2026 06:20:16 +0000 (0:00:01.549) 0:46:44.005 *******
2026-02-02 06:20:34.403881 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:20:34.403888 | orchestrator |
2026-02-02 06:20:34.403895 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-02 06:20:34.403948 | orchestrator | Monday 02 February 2026 06:20:17 +0000 (0:00:01.289) 0:46:45.294 *******
2026-02-02 06:20:34.403956 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:20:34.403963 | orchestrator |
2026-02-02 06:20:34.403970 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-02 06:20:34.403977 | orchestrator | Monday 02 February 2026 06:20:18 +0000 (0:00:00.875) 0:46:46.170 *******
2026-02-02 06:20:34.403985 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:20:34.403992 | orchestrator |
2026-02-02 06:20:34.403999 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-02 06:20:34.404006 | orchestrator | Monday 02 February 2026 06:20:19 +0000 (0:00:00.831) 0:46:47.002 *******
2026-02-02 06:20:34.404014 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-02-02 06:20:34.404021 | orchestrator |
2026-02-02 06:20:34.404028 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-02 06:20:34.404035 | orchestrator | Monday 02 February 2026 06:20:20 +0000 (0:00:01.211) 0:46:48.213 *******
2026-02-02 06:20:34.404042 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:20:34.404050 | orchestrator |
2026-02-02 06:20:34.404065 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-02 06:20:34.404072 | orchestrator | Monday 02 February 2026 06:20:22 +0000 (0:00:01.732) 0:46:49.946 *******
2026-02-02 06:20:34.404079 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-02 06:20:34.404086 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-02 06:20:34.404093 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-02 06:20:34.404101 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:20:34.404108 | orchestrator |
2026-02-02 06:20:34.404115 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-02 06:20:34.404122 | orchestrator | Monday 02 February 2026 06:20:23 +0000 (0:00:01.133) 0:46:51.080 *******
2026-02-02 06:20:34.404129 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:20:34.404136 | orchestrator |
2026-02-02 06:20:34.404144 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-02 06:20:34.404173 | orchestrator | Monday 02 February 2026 06:20:24 +0000 (0:00:01.116) 0:46:52.196 *******
2026-02-02 06:20:34.404183 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:20:34.404194 | orchestrator |
2026-02-02 06:20:34.404206 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-02 06:20:34.404218 | orchestrator | Monday 02 February 2026 06:20:25 +0000 (0:00:01.231) 0:46:53.428 *******
2026-02-02 06:20:34.404244 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:20:34.404257 | orchestrator |
2026-02-02 06:20:34.404265 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-02 06:20:34.404279 | orchestrator | Monday 02 February 2026 06:20:26 +0000 (0:00:01.130) 0:46:54.558 *******
2026-02-02 06:20:34.404286 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:20:34.404294 | orchestrator |
2026-02-02 06:20:34.404301 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-02 06:20:34.404308 | orchestrator | Monday 02 February 2026 06:20:28 +0000 (0:00:01.135) 0:46:55.693 *******
2026-02-02 06:20:34.404315 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:20:34.404322 | orchestrator |
2026-02-02 06:20:34.404330 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-02 06:20:34.404337 | orchestrator | Monday 02 February 2026 06:20:28 +0000 (0:00:00.789) 0:46:56.483 *******
2026-02-02 06:20:34.404344 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:20:34.404351 | orchestrator |
2026-02-02 06:20:34.404358 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-02 06:20:34.404366 | orchestrator | Monday 02 February 2026 06:20:31 +0000 (0:00:02.123) 0:46:58.606 *******
2026-02-02 06:20:34.404373 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:20:34.404380 | orchestrator |
2026-02-02 06:20:34.404387 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-02 06:20:34.404394 | orchestrator | Monday 02 February 2026 06:20:31 +0000 (0:00:00.764) 0:46:59.370 *******
2026-02-02 06:20:34.404401 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-02-02 06:20:34.404409 | orchestrator | 2026-02-02 06:20:34.404416 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-02 06:20:34.404423 | orchestrator | Monday 02 February 2026 06:20:33 +0000 (0:00:01.421) 0:47:00.792 ******* 2026-02-02 06:20:34.404430 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:20:34.404437 | orchestrator | 2026-02-02 06:20:34.404445 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-02 06:20:34.404459 | orchestrator | Monday 02 February 2026 06:20:34 +0000 (0:00:01.181) 0:47:01.973 ******* 2026-02-02 06:21:18.846747 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:21:18.846860 | orchestrator | 2026-02-02 06:21:18.846879 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-02 06:21:18.846893 | orchestrator | Monday 02 February 2026 06:20:35 +0000 (0:00:01.174) 0:47:03.148 ******* 2026-02-02 06:21:18.846905 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:21:18.846916 | orchestrator | 2026-02-02 06:21:18.846928 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-02 06:21:18.846939 | orchestrator | Monday 02 February 2026 06:20:36 +0000 (0:00:01.136) 0:47:04.284 ******* 2026-02-02 06:21:18.846950 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:21:18.846961 | orchestrator | 2026-02-02 06:21:18.846972 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-02 06:21:18.846984 | orchestrator | Monday 02 February 2026 06:20:37 +0000 (0:00:01.141) 0:47:05.426 ******* 2026-02-02 06:21:18.846994 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:21:18.847006 | orchestrator | 2026-02-02 06:21:18.847017 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-02 06:21:18.847028 | orchestrator | 
Monday 02 February 2026 06:20:38 +0000 (0:00:01.146) 0:47:06.573 ******* 2026-02-02 06:21:18.847039 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:21:18.847050 | orchestrator | 2026-02-02 06:21:18.847061 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-02 06:21:18.847072 | orchestrator | Monday 02 February 2026 06:20:40 +0000 (0:00:01.162) 0:47:07.735 ******* 2026-02-02 06:21:18.847083 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:21:18.847094 | orchestrator | 2026-02-02 06:21:18.847105 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-02 06:21:18.847116 | orchestrator | Monday 02 February 2026 06:20:41 +0000 (0:00:01.184) 0:47:08.920 ******* 2026-02-02 06:21:18.847127 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:21:18.847226 | orchestrator | 2026-02-02 06:21:18.847241 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-02 06:21:18.847252 | orchestrator | Monday 02 February 2026 06:20:42 +0000 (0:00:01.111) 0:47:10.032 ******* 2026-02-02 06:21:18.847263 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:21:18.847275 | orchestrator | 2026-02-02 06:21:18.847288 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-02 06:21:18.847302 | orchestrator | Monday 02 February 2026 06:20:43 +0000 (0:00:00.833) 0:47:10.866 ******* 2026-02-02 06:21:18.847331 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5 2026-02-02 06:21:18.847346 | orchestrator | 2026-02-02 06:21:18.847359 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-02 06:21:18.847371 | orchestrator | Monday 02 February 2026 06:20:44 +0000 (0:00:01.131) 0:47:11.998 ******* 2026-02-02 06:21:18.847384 | orchestrator | ok: [testbed-node-5] => 
(item=/etc/ceph) 2026-02-02 06:21:18.847397 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/) 2026-02-02 06:21:18.847410 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-02-02 06:21:18.847422 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-02-02 06:21:18.847435 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-02-02 06:21:18.847447 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-02-02 06:21:18.847460 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-02-02 06:21:18.847472 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-02-02 06:21:18.847485 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-02 06:21:18.847497 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-02 06:21:18.847509 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-02 06:21:18.847521 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-02 06:21:18.847534 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-02 06:21:18.847547 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-02 06:21:18.847559 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2026-02-02 06:21:18.847573 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph) 2026-02-02 06:21:18.847586 | orchestrator | 2026-02-02 06:21:18.847598 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-02 06:21:18.847610 | orchestrator | Monday 02 February 2026 06:20:51 +0000 (0:00:06.598) 0:47:18.596 ******* 2026-02-02 06:21:18.847623 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-02-02 06:21:18.847637 | orchestrator | 2026-02-02 06:21:18.847647 | orchestrator | TASK 
[ceph-config : Create rados gateway instance directories] ***************** 2026-02-02 06:21:18.847658 | orchestrator | Monday 02 February 2026 06:20:52 +0000 (0:00:01.112) 0:47:19.709 ******* 2026-02-02 06:21:18.847669 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-02 06:21:18.847681 | orchestrator | 2026-02-02 06:21:18.847692 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-02 06:21:18.847702 | orchestrator | Monday 02 February 2026 06:20:53 +0000 (0:00:01.476) 0:47:21.185 ******* 2026-02-02 06:21:18.847713 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-02 06:21:18.847724 | orchestrator | 2026-02-02 06:21:18.847735 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-02 06:21:18.847745 | orchestrator | Monday 02 February 2026 06:20:55 +0000 (0:00:01.654) 0:47:22.840 ******* 2026-02-02 06:21:18.847756 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:21:18.847767 | orchestrator | 2026-02-02 06:21:18.847777 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-02 06:21:18.847807 | orchestrator | Monday 02 February 2026 06:20:56 +0000 (0:00:00.874) 0:47:23.715 ******* 2026-02-02 06:21:18.847828 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:21:18.847839 | orchestrator | 2026-02-02 06:21:18.847850 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-02 06:21:18.847860 | orchestrator | Monday 02 February 2026 06:20:56 +0000 (0:00:00.777) 0:47:24.492 ******* 2026-02-02 06:21:18.847871 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:21:18.847882 | orchestrator | 2026-02-02 06:21:18.847892 | orchestrator | TASK [ceph-config : 
Set_fact rejected_devices] ********************************* 2026-02-02 06:21:18.847903 | orchestrator | Monday 02 February 2026 06:20:57 +0000 (0:00:00.801) 0:47:25.294 ******* 2026-02-02 06:21:18.847913 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:21:18.847924 | orchestrator | 2026-02-02 06:21:18.847935 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-02 06:21:18.847945 | orchestrator | Monday 02 February 2026 06:20:58 +0000 (0:00:00.822) 0:47:26.116 ******* 2026-02-02 06:21:18.847956 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:21:18.847967 | orchestrator | 2026-02-02 06:21:18.847978 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-02 06:21:18.847988 | orchestrator | Monday 02 February 2026 06:20:59 +0000 (0:00:00.840) 0:47:26.957 ******* 2026-02-02 06:21:18.847999 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:21:18.848010 | orchestrator | 2026-02-02 06:21:18.848021 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-02 06:21:18.848031 | orchestrator | Monday 02 February 2026 06:21:00 +0000 (0:00:00.780) 0:47:27.737 ******* 2026-02-02 06:21:18.848042 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:21:18.848052 | orchestrator | 2026-02-02 06:21:18.848063 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-02 06:21:18.848074 | orchestrator | Monday 02 February 2026 06:21:00 +0000 (0:00:00.804) 0:47:28.542 ******* 2026-02-02 06:21:18.848085 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:21:18.848097 | orchestrator | 2026-02-02 06:21:18.848115 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-02 06:21:18.848132 | orchestrator | Monday 02 
February 2026 06:21:01 +0000 (0:00:00.868) 0:47:29.410 ******* 2026-02-02 06:21:18.848150 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:21:18.848196 | orchestrator | 2026-02-02 06:21:18.848220 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-02 06:21:18.848232 | orchestrator | Monday 02 February 2026 06:21:02 +0000 (0:00:00.851) 0:47:30.262 ******* 2026-02-02 06:21:18.848243 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:21:18.848253 | orchestrator | 2026-02-02 06:21:18.848264 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-02 06:21:18.848275 | orchestrator | Monday 02 February 2026 06:21:03 +0000 (0:00:00.818) 0:47:31.080 ******* 2026-02-02 06:21:18.848286 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:21:18.848296 | orchestrator | 2026-02-02 06:21:18.848307 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-02 06:21:18.848318 | orchestrator | Monday 02 February 2026 06:21:04 +0000 (0:00:00.834) 0:47:31.914 ******* 2026-02-02 06:21:18.848329 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] 2026-02-02 06:21:18.848339 | orchestrator | 2026-02-02 06:21:18.848350 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-02 06:21:18.848361 | orchestrator | Monday 02 February 2026 06:21:08 +0000 (0:00:04.048) 0:47:35.962 ******* 2026-02-02 06:21:18.848371 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-02 06:21:18.848382 | orchestrator | 2026-02-02 06:21:18.848393 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-02 06:21:18.848404 | orchestrator | Monday 02 February 2026 06:21:09 +0000 (0:00:00.806) 0:47:36.768 ******* 2026-02-02 06:21:18.848424 | 
orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-02-02 06:21:18.848438 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-02-02 06:21:18.848451 | orchestrator | 2026-02-02 06:21:18.848462 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-02 06:21:18.848472 | orchestrator | Monday 02 February 2026 06:21:16 +0000 (0:00:07.283) 0:47:44.052 ******* 2026-02-02 06:21:18.848483 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:21:18.848494 | orchestrator | 2026-02-02 06:21:18.848505 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-02 06:21:18.848515 | orchestrator | Monday 02 February 2026 06:21:17 +0000 (0:00:00.818) 0:47:44.871 ******* 2026-02-02 06:21:18.848526 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:21:18.848537 | orchestrator | 2026-02-02 06:21:18.848548 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-02 06:21:18.848559 | orchestrator | Monday 02 February 2026 06:21:18 +0000 (0:00:00.759) 0:47:45.631 ******* 2026-02-02 06:21:18.848569 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:21:18.848580 | orchestrator | 2026-02-02 06:21:18.848591 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-02-02 06:21:18.848609 | orchestrator | Monday 02 February 2026 06:21:18 +0000 (0:00:00.787) 0:47:46.419 ******* 2026-02-02 06:22:05.444827 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:22:05.444926 | orchestrator | 2026-02-02 06:22:05.444940 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-02 06:22:05.444956 | orchestrator | Monday 02 February 2026 06:21:19 +0000 (0:00:00.793) 0:47:47.212 ******* 2026-02-02 06:22:05.444971 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:22:05.444986 | orchestrator | 2026-02-02 06:22:05.445005 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-02 06:22:05.445021 | orchestrator | Monday 02 February 2026 06:21:20 +0000 (0:00:00.782) 0:47:47.995 ******* 2026-02-02 06:22:05.445035 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:22:05.445050 | orchestrator | 2026-02-02 06:22:05.445066 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-02 06:22:05.445080 | orchestrator | Monday 02 February 2026 06:21:21 +0000 (0:00:00.946) 0:47:48.942 ******* 2026-02-02 06:22:05.445094 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-02 06:22:05.445109 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-02 06:22:05.445123 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-02 06:22:05.445137 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:22:05.445151 | orchestrator | 2026-02-02 06:22:05.445164 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-02 06:22:05.445178 | orchestrator | Monday 02 February 2026 06:21:22 +0000 (0:00:01.400) 0:47:50.343 ******* 2026-02-02 06:22:05.445192 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-02 06:22:05.445283 | 
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-02 06:22:05.445298 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-02 06:22:05.445314 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:22:05.445329 | orchestrator | 2026-02-02 06:22:05.445344 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-02 06:22:05.445359 | orchestrator | Monday 02 February 2026 06:21:24 +0000 (0:00:01.535) 0:47:51.878 ******* 2026-02-02 06:22:05.445402 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-02 06:22:05.445413 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-02 06:22:05.445437 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-02 06:22:05.445447 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:22:05.445457 | orchestrator | 2026-02-02 06:22:05.445468 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-02 06:22:05.445482 | orchestrator | Monday 02 February 2026 06:21:25 +0000 (0:00:01.114) 0:47:52.992 ******* 2026-02-02 06:22:05.445496 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:22:05.445511 | orchestrator | 2026-02-02 06:22:05.445526 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-02 06:22:05.445540 | orchestrator | Monday 02 February 2026 06:21:26 +0000 (0:00:00.795) 0:47:53.787 ******* 2026-02-02 06:22:05.445554 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-02 06:22:05.445570 | orchestrator | 2026-02-02 06:22:05.445585 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-02 06:22:05.445601 | orchestrator | Monday 02 February 2026 06:21:27 +0000 (0:00:00.969) 0:47:54.757 ******* 2026-02-02 06:22:05.445614 | orchestrator | changed: [testbed-node-5] 2026-02-02 06:22:05.445624 | orchestrator | 
2026-02-02 06:22:05.445634 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-02 06:22:05.445645 | orchestrator | Monday 02 February 2026 06:21:28 +0000 (0:00:01.403) 0:47:56.160 ******* 2026-02-02 06:22:05.445655 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:22:05.445665 | orchestrator | 2026-02-02 06:22:05.445675 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-02 06:22:05.445686 | orchestrator | Monday 02 February 2026 06:21:29 +0000 (0:00:00.815) 0:47:56.976 ******* 2026-02-02 06:22:05.445696 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 06:22:05.445707 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 06:22:05.445717 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 06:22:05.445727 | orchestrator | 2026-02-02 06:22:05.445737 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-02 06:22:05.445746 | orchestrator | Monday 02 February 2026 06:21:31 +0000 (0:00:01.678) 0:47:58.655 ******* 2026-02-02 06:22:05.445754 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-5 2026-02-02 06:22:05.445763 | orchestrator | 2026-02-02 06:22:05.445772 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-02 06:22:05.445780 | orchestrator | Monday 02 February 2026 06:21:32 +0000 (0:00:01.200) 0:47:59.855 ******* 2026-02-02 06:22:05.445789 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:22:05.445797 | orchestrator | 2026-02-02 06:22:05.445806 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-02 06:22:05.445814 | orchestrator | Monday 02 February 2026 06:21:33 +0000 (0:00:01.211) 
0:48:01.067 ******* 2026-02-02 06:22:05.445823 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:22:05.445832 | orchestrator | 2026-02-02 06:22:05.445840 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-02 06:22:05.445849 | orchestrator | Monday 02 February 2026 06:21:34 +0000 (0:00:01.201) 0:48:02.268 ******* 2026-02-02 06:22:05.445857 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:22:05.445866 | orchestrator | 2026-02-02 06:22:05.445874 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-02 06:22:05.445883 | orchestrator | Monday 02 February 2026 06:21:36 +0000 (0:00:01.564) 0:48:03.832 ******* 2026-02-02 06:22:05.445892 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:22:05.445900 | orchestrator | 2026-02-02 06:22:05.445911 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-02 06:22:05.445927 | orchestrator | Monday 02 February 2026 06:21:37 +0000 (0:00:01.180) 0:48:05.013 ******* 2026-02-02 06:22:05.445977 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-02 06:22:05.445995 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-02 06:22:05.446011 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-02 06:22:05.446083 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-02 06:22:05.446093 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-02 06:22:05.446102 | orchestrator | 2026-02-02 06:22:05.446111 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-02-02 06:22:05.446119 | orchestrator | Monday 02 February 2026 06:21:39 +0000 (0:00:02.514) 0:48:07.527 ******* 2026-02-02 
06:22:05.446128 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:22:05.446136 | orchestrator | 2026-02-02 06:22:05.446145 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-02 06:22:05.446154 | orchestrator | Monday 02 February 2026 06:21:40 +0000 (0:00:00.803) 0:48:08.331 ******* 2026-02-02 06:22:05.446162 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-5 2026-02-02 06:22:05.446171 | orchestrator | 2026-02-02 06:22:05.446179 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-02 06:22:05.446188 | orchestrator | Monday 02 February 2026 06:21:41 +0000 (0:00:01.137) 0:48:09.469 ******* 2026-02-02 06:22:05.446196 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-02 06:22:05.446230 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-02-02 06:22:05.446239 | orchestrator | 2026-02-02 06:22:05.446248 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-02 06:22:05.446256 | orchestrator | Monday 02 February 2026 06:21:43 +0000 (0:00:01.852) 0:48:11.321 ******* 2026-02-02 06:22:05.446265 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 06:22:05.446273 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-02 06:22:05.446282 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-02 06:22:05.446291 | orchestrator | 2026-02-02 06:22:05.446306 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-02 06:22:05.446315 | orchestrator | Monday 02 February 2026 06:21:46 +0000 (0:00:03.161) 0:48:14.482 ******* 2026-02-02 06:22:05.446324 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-02-02 06:22:05.446333 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-02 
06:22:05.446341 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:22:05.446350 | orchestrator | 2026-02-02 06:22:05.446359 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-02 06:22:05.446368 | orchestrator | Monday 02 February 2026 06:21:48 +0000 (0:00:01.595) 0:48:16.078 ******* 2026-02-02 06:22:05.446376 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:22:05.446385 | orchestrator | 2026-02-02 06:22:05.446395 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-02 06:22:05.446409 | orchestrator | Monday 02 February 2026 06:21:49 +0000 (0:00:00.916) 0:48:16.994 ******* 2026-02-02 06:22:05.446423 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:22:05.446437 | orchestrator | 2026-02-02 06:22:05.446452 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-02 06:22:05.446466 | orchestrator | Monday 02 February 2026 06:21:50 +0000 (0:00:00.755) 0:48:17.750 ******* 2026-02-02 06:22:05.446480 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:22:05.446494 | orchestrator | 2026-02-02 06:22:05.446503 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-02 06:22:05.446512 | orchestrator | Monday 02 February 2026 06:21:50 +0000 (0:00:00.762) 0:48:18.513 ******* 2026-02-02 06:22:05.446520 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-5 2026-02-02 06:22:05.446538 | orchestrator | 2026-02-02 06:22:05.446546 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-02 06:22:05.446555 | orchestrator | Monday 02 February 2026 06:21:52 +0000 (0:00:01.274) 0:48:19.787 ******* 2026-02-02 06:22:05.446564 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:22:05.446572 | orchestrator | 2026-02-02 06:22:05.446581 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-02-02 06:22:05.446589 | orchestrator | Monday 02 February 2026 06:21:53 +0000 (0:00:01.434) 0:48:21.222 ******* 2026-02-02 06:22:05.446598 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:22:05.446606 | orchestrator | 2026-02-02 06:22:05.446615 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-02 06:22:05.446624 | orchestrator | Monday 02 February 2026 06:21:56 +0000 (0:00:03.350) 0:48:24.573 ******* 2026-02-02 06:22:05.446632 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-5 2026-02-02 06:22:05.446641 | orchestrator | 2026-02-02 06:22:05.446649 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-02-02 06:22:05.446658 | orchestrator | Monday 02 February 2026 06:21:58 +0000 (0:00:01.124) 0:48:25.697 ******* 2026-02-02 06:22:05.446667 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:22:05.446675 | orchestrator | 2026-02-02 06:22:05.446684 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-02 06:22:05.446692 | orchestrator | Monday 02 February 2026 06:22:00 +0000 (0:00:01.906) 0:48:27.604 ******* 2026-02-02 06:22:05.446701 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:22:05.446710 | orchestrator | 2026-02-02 06:22:05.446721 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-02 06:22:05.446736 | orchestrator | Monday 02 February 2026 06:22:02 +0000 (0:00:02.033) 0:48:29.637 ******* 2026-02-02 06:22:05.446749 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:22:05.446763 | orchestrator | 2026-02-02 06:22:05.446777 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-02 06:22:05.446789 | orchestrator | Monday 02 February 2026 06:22:04 +0000 (0:00:02.189) 0:48:31.827 ******* 2026-02-02 
06:22:05.446802 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:22:05.446816 | orchestrator | 2026-02-02 06:22:05.446845 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-02-02 06:24:19.112369 | orchestrator | Monday 02 February 2026 06:22:05 +0000 (0:00:01.181) 0:48:33.008 ******* 2026-02-02 06:24:19.112487 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:24:19.112503 | orchestrator | 2026-02-02 06:24:19.112517 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-02 06:24:19.112528 | orchestrator | Monday 02 February 2026 06:22:06 +0000 (0:00:01.118) 0:48:34.127 ******* 2026-02-02 06:24:19.112539 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-02-02 06:24:19.112550 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-02-02 06:24:19.112561 | orchestrator | 2026-02-02 06:24:19.112572 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-02 06:24:19.112583 | orchestrator | Monday 02 February 2026 06:22:08 +0000 (0:00:01.792) 0:48:35.920 ******* 2026-02-02 06:24:19.112593 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-02-02 06:24:19.112604 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-02-02 06:24:19.112615 | orchestrator | 2026-02-02 06:24:19.112625 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-02-02 06:24:19.112636 | orchestrator | Monday 02 February 2026 06:22:11 +0000 (0:00:02.815) 0:48:38.735 ******* 2026-02-02 06:24:19.112647 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-02-02 06:24:19.112658 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-02 06:24:19.112669 | orchestrator | 2026-02-02 06:24:19.112680 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-02 06:24:19.112690 | orchestrator | Monday 02 February 2026 06:22:15 +0000 (0:00:04.345) 
0:48:43.081 ******* 2026-02-02 06:24:19.112701 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:24:19.112712 | orchestrator | 2026-02-02 06:24:19.112746 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-02 06:24:19.112758 | orchestrator | Monday 02 February 2026 06:22:16 +0000 (0:00:00.924) 0:48:44.006 ******* 2026-02-02 06:24:19.112769 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-02-02 06:24:19.112780 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-02 06:24:19.112791 | orchestrator | 2026-02-02 06:24:19.112802 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-02 06:24:19.112826 | orchestrator | Monday 02 February 2026 06:22:29 +0000 (0:00:13.416) 0:48:57.422 ******* 2026-02-02 06:24:19.112837 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:24:19.112847 | orchestrator | 2026-02-02 06:24:19.112858 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-02-02 06:24:19.112869 | orchestrator | Monday 02 February 2026 06:22:30 +0000 (0:00:00.939) 0:48:58.362 ******* 2026-02-02 06:24:19.112879 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:24:19.112890 | orchestrator | 2026-02-02 06:24:19.112901 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-02-02 06:24:19.112913 | orchestrator | Monday 02 February 2026 06:22:31 +0000 (0:00:00.763) 0:48:59.125 ******* 2026-02-02 06:24:19.112923 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:24:19.112934 | orchestrator | 2026-02-02 06:24:19.112944 | orchestrator | TASK [Waiting for clean pgs...] 
************************************************ 2026-02-02 06:24:19.112955 | orchestrator | Monday 02 February 2026 06:22:32 +0000 (0:00:00.774) 0:48:59.900 ******* 2026-02-02 06:24:19.112966 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 2026-02-02 06:24:19.112976 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-02 06:24:19.112987 | orchestrator | 2026-02-02 06:24:19.112997 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-02 06:24:19.113008 | orchestrator | Monday 02 February 2026 06:22:37 +0000 (0:00:04.906) 0:49:04.806 ******* 2026-02-02 06:24:19.113018 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:24:19.113029 | orchestrator | 2026-02-02 06:24:19.113039 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-02 06:24:19.113050 | orchestrator | Monday 02 February 2026 06:22:38 +0000 (0:00:00.800) 0:49:05.607 ******* 2026-02-02 06:24:19.113060 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:24:19.113071 | orchestrator | 2026-02-02 06:24:19.113081 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-02 06:24:19.113092 | orchestrator | Monday 02 February 2026 06:22:38 +0000 (0:00:00.770) 0:49:06.377 ******* 2026-02-02 06:24:19.113103 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:24:19.113113 | orchestrator | 2026-02-02 06:24:19.113124 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-02 06:24:19.113134 | orchestrator | Monday 02 February 2026 06:22:39 +0000 (0:00:00.767) 0:49:07.145 ******* 2026-02-02 06:24:19.113145 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:24:19.113155 | orchestrator | 2026-02-02 06:24:19.113166 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] 
********************************** 2026-02-02 06:24:19.113176 | orchestrator | Monday 02 February 2026 06:22:40 +0000 (0:00:00.770) 0:49:07.915 ******* 2026-02-02 06:24:19.113187 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:24:19.113197 | orchestrator | 2026-02-02 06:24:19.113208 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-02-02 06:24:19.113218 | orchestrator | Monday 02 February 2026 06:22:41 +0000 (0:00:00.803) 0:49:08.718 ******* 2026-02-02 06:24:19.113229 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:24:19.113239 | orchestrator | 2026-02-02 06:24:19.113250 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-02 06:24:19.113261 | orchestrator | Monday 02 February 2026 06:22:41 +0000 (0:00:00.760) 0:49:09.479 ******* 2026-02-02 06:24:19.113271 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:24:19.113316 | orchestrator | 2026-02-02 06:24:19.113329 | orchestrator | PLAY [Complete osd upgrade] **************************************************** 2026-02-02 06:24:19.113340 | orchestrator | 2026-02-02 06:24:19.113350 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-02 06:24:19.113361 | orchestrator | Monday 02 February 2026 06:22:43 +0000 (0:00:01.798) 0:49:11.277 ******* 2026-02-02 06:24:19.113372 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:24:19.113382 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:24:19.113408 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:24:19.113419 | orchestrator | 2026-02-02 06:24:19.113430 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-02 06:24:19.113441 | orchestrator | Monday 02 February 2026 06:22:45 +0000 (0:00:01.745) 0:49:13.023 ******* 2026-02-02 06:24:19.113451 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:24:19.113462 | orchestrator | ok: 
[testbed-node-4] 2026-02-02 06:24:19.113472 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:24:19.113483 | orchestrator | 2026-02-02 06:24:19.113493 | orchestrator | TASK [Re-enable pg autoscale on pools] ***************************************** 2026-02-02 06:24:19.113504 | orchestrator | Monday 02 February 2026 06:22:46 +0000 (0:00:01.371) 0:49:14.394 ******* 2026-02-02 06:24:19.113515 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-02-02 06:24:19.113525 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-02-02 06:24:19.113537 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-02-02 06:24:19.113548 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-02-02 06:24:19.113559 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-02-02 06:24:19.113570 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-02-02 06:24:19.113581 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'}) 2026-02-02 06:24:19.113592 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'}) 2026-02-02 06:24:19.113608 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'}) 2026-02-02 06:24:19.113618 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})  2026-02-02 06:24:19.113629 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  
2026-02-02 06:24:19.113640 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-02-02 06:24:19.113650 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-02-02 06:24:19.113661 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-02-02 06:24:19.113671 | orchestrator | 2026-02-02 06:24:19.113682 | orchestrator | TASK [Unset osd flags] ********************************************************* 2026-02-02 06:24:19.113693 | orchestrator | Monday 02 February 2026 06:24:01 +0000 (0:01:14.328) 0:50:28.722 ******* 2026-02-02 06:24:19.113703 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-02-02 06:24:19.113714 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-02-02 06:24:19.113724 | orchestrator | 2026-02-02 06:24:19.113735 | orchestrator | TASK [Re-enable balancer] ****************************************************** 2026-02-02 06:24:19.113745 | orchestrator | Monday 02 February 2026 06:24:06 +0000 (0:00:05.521) 0:50:34.244 ******* 2026-02-02 06:24:19.113756 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-02 06:24:19.113766 | orchestrator | 2026-02-02 06:24:19.113777 | orchestrator | PLAY [Upgrade ceph mdss cluster, deactivate all rank > 0] ********************** 2026-02-02 06:24:19.113795 | orchestrator | 2026-02-02 06:24:19.113805 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-02 06:24:19.113816 | orchestrator | Monday 02 February 2026 06:24:09 +0000 (0:00:03.129) 0:50:37.374 ******* 2026-02-02 06:24:19.113826 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-02-02 06:24:19.113837 | orchestrator | 2026-02-02 06:24:19.113847 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 
2026-02-02 06:24:19.113858 | orchestrator | Monday 02 February 2026 06:24:10 +0000 (0:00:01.181) 0:50:38.555 ******* 2026-02-02 06:24:19.113869 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:24:19.113879 | orchestrator | 2026-02-02 06:24:19.113890 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-02 06:24:19.113900 | orchestrator | Monday 02 February 2026 06:24:12 +0000 (0:00:01.479) 0:50:40.035 ******* 2026-02-02 06:24:19.113911 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:24:19.113922 | orchestrator | 2026-02-02 06:24:19.113932 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-02 06:24:19.113943 | orchestrator | Monday 02 February 2026 06:24:13 +0000 (0:00:01.134) 0:50:41.169 ******* 2026-02-02 06:24:19.113953 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:24:19.113964 | orchestrator | 2026-02-02 06:24:19.113974 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-02 06:24:19.113985 | orchestrator | Monday 02 February 2026 06:24:15 +0000 (0:00:01.547) 0:50:42.717 ******* 2026-02-02 06:24:19.113995 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:24:19.114006 | orchestrator | 2026-02-02 06:24:19.114074 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-02 06:24:19.114087 | orchestrator | Monday 02 February 2026 06:24:16 +0000 (0:00:01.614) 0:50:44.332 ******* 2026-02-02 06:24:19.114098 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:24:19.114109 | orchestrator | 2026-02-02 06:24:19.114119 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-02 06:24:19.114130 | orchestrator | Monday 02 February 2026 06:24:17 +0000 (0:00:01.167) 0:50:45.500 ******* 2026-02-02 06:24:19.114140 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:24:19.114151 | orchestrator | 2026-02-02 
06:24:19.114168 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-02 06:24:43.099894 | orchestrator | Monday 02 February 2026 06:24:19 +0000 (0:00:01.182) 0:50:46.682 ******* 2026-02-02 06:24:43.100029 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:24:43.100051 | orchestrator | 2026-02-02 06:24:43.100068 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-02 06:24:43.100087 | orchestrator | Monday 02 February 2026 06:24:20 +0000 (0:00:01.156) 0:50:47.838 ******* 2026-02-02 06:24:43.100107 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:24:43.100127 | orchestrator | 2026-02-02 06:24:43.100142 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-02 06:24:43.100153 | orchestrator | Monday 02 February 2026 06:24:21 +0000 (0:00:01.140) 0:50:48.979 ******* 2026-02-02 06:24:43.100164 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-02 06:24:43.100176 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 06:24:43.100190 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 06:24:43.100209 | orchestrator | 2026-02-02 06:24:43.100229 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-02 06:24:43.100247 | orchestrator | Monday 02 February 2026 06:24:23 +0000 (0:00:01.701) 0:50:50.681 ******* 2026-02-02 06:24:43.100266 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:24:43.100286 | orchestrator | 2026-02-02 06:24:43.100353 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-02 06:24:43.100374 | orchestrator | Monday 02 February 2026 06:24:24 +0000 (0:00:01.225) 0:50:51.906 ******* 2026-02-02 06:24:43.100392 | orchestrator | ok: [testbed-node-0] => 
(item=testbed-node-0) 2026-02-02 06:24:43.100446 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 06:24:43.100468 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 06:24:43.100488 | orchestrator | 2026-02-02 06:24:43.100508 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-02 06:24:43.100528 | orchestrator | Monday 02 February 2026 06:24:27 +0000 (0:00:02.855) 0:50:54.762 ******* 2026-02-02 06:24:43.100568 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-02 06:24:43.100585 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-02 06:24:43.100599 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-02 06:24:43.100619 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:24:43.100639 | orchestrator | 2026-02-02 06:24:43.100658 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-02 06:24:43.100675 | orchestrator | Monday 02 February 2026 06:24:28 +0000 (0:00:01.422) 0:50:56.184 ******* 2026-02-02 06:24:43.100691 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-02 06:24:43.100707 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-02 06:24:43.100721 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-02 06:24:43.100734 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:24:43.100747 | orchestrator | 2026-02-02 06:24:43.100760 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-02 06:24:43.100778 | orchestrator | Monday 02 February 2026 06:24:30 +0000 (0:00:01.711) 0:50:57.896 ******* 2026-02-02 06:24:43.100801 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 06:24:43.100823 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 06:24:43.100836 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 06:24:43.100866 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:24:43.100878 | orchestrator | 2026-02-02 06:24:43.100889 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] 
*************************** 2026-02-02 06:24:43.100900 | orchestrator | Monday 02 February 2026 06:24:31 +0000 (0:00:01.233) 0:50:59.129 ******* 2026-02-02 06:24:43.100913 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '01c921aa07f8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-02 06:24:24.827034', 'end': '2026-02-02 06:24:24.879193', 'delta': '0:00:00.052159', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['01c921aa07f8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-02 06:24:43.100951 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'c530967d0aad', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-02 06:24:25.450027', 'end': '2026-02-02 06:24:25.481046', 'delta': '0:00:00.031019', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c530967d0aad'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-02 06:24:43.100963 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'a68c96a70534', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-02 06:24:26.022995', 'end': 
'2026-02-02 06:24:26.072345', 'delta': '0:00:00.049350', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a68c96a70534'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-02 06:24:43.100974 | orchestrator | 2026-02-02 06:24:43.100985 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-02 06:24:43.100996 | orchestrator | Monday 02 February 2026 06:24:32 +0000 (0:00:01.250) 0:51:00.380 ******* 2026-02-02 06:24:43.101008 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:24:43.101027 | orchestrator | 2026-02-02 06:24:43.101047 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-02 06:24:43.101059 | orchestrator | Monday 02 February 2026 06:24:34 +0000 (0:00:01.236) 0:51:01.617 ******* 2026-02-02 06:24:43.101070 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:24:43.101081 | orchestrator | 2026-02-02 06:24:43.101092 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-02 06:24:43.101102 | orchestrator | Monday 02 February 2026 06:24:35 +0000 (0:00:01.276) 0:51:02.894 ******* 2026-02-02 06:24:43.101114 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:24:43.101125 | orchestrator | 2026-02-02 06:24:43.101136 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-02 06:24:43.101146 | orchestrator | Monday 02 February 2026 06:24:36 +0000 (0:00:01.125) 0:51:04.020 ******* 2026-02-02 06:24:43.101157 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:24:43.101168 | orchestrator | 2026-02-02 
06:24:43.101179 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 06:24:43.101189 | orchestrator | Monday 02 February 2026 06:24:38 +0000 (0:00:01.956) 0:51:05.976 ******* 2026-02-02 06:24:43.101200 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:24:43.101211 | orchestrator | 2026-02-02 06:24:43.101222 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-02 06:24:43.101232 | orchestrator | Monday 02 February 2026 06:24:39 +0000 (0:00:01.158) 0:51:07.135 ******* 2026-02-02 06:24:43.101243 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:24:43.101261 | orchestrator | 2026-02-02 06:24:43.101271 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-02 06:24:43.101282 | orchestrator | Monday 02 February 2026 06:24:40 +0000 (0:00:01.190) 0:51:08.326 ******* 2026-02-02 06:24:43.101318 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:24:43.101333 | orchestrator | 2026-02-02 06:24:43.101344 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 06:24:43.101355 | orchestrator | Monday 02 February 2026 06:24:41 +0000 (0:00:01.213) 0:51:09.539 ******* 2026-02-02 06:24:43.101372 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:24:43.101391 | orchestrator | 2026-02-02 06:24:43.101422 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-02 06:24:52.670715 | orchestrator | Monday 02 February 2026 06:24:43 +0000 (0:00:01.131) 0:51:10.671 ******* 2026-02-02 06:24:52.670852 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:24:52.670878 | orchestrator | 2026-02-02 06:24:52.670897 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-02 06:24:52.670914 | orchestrator | Monday 02 February 2026 06:24:44 +0000 (0:00:01.157) 
0:51:11.829 ******* 2026-02-02 06:24:52.670931 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:24:52.670947 | orchestrator | 2026-02-02 06:24:52.670963 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-02 06:24:52.670980 | orchestrator | Monday 02 February 2026 06:24:45 +0000 (0:00:01.129) 0:51:12.959 ******* 2026-02-02 06:24:52.670997 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:24:52.671013 | orchestrator | 2026-02-02 06:24:52.671024 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-02 06:24:52.671033 | orchestrator | Monday 02 February 2026 06:24:46 +0000 (0:00:01.204) 0:51:14.163 ******* 2026-02-02 06:24:52.671043 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:24:52.671053 | orchestrator | 2026-02-02 06:24:52.671063 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-02 06:24:52.671073 | orchestrator | Monday 02 February 2026 06:24:47 +0000 (0:00:01.134) 0:51:15.298 ******* 2026-02-02 06:24:52.671082 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:24:52.671092 | orchestrator | 2026-02-02 06:24:52.671101 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-02 06:24:52.671112 | orchestrator | Monday 02 February 2026 06:24:48 +0000 (0:00:01.165) 0:51:16.463 ******* 2026-02-02 06:24:52.671121 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:24:52.671131 | orchestrator | 2026-02-02 06:24:52.671140 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-02 06:24:52.671154 | orchestrator | Monday 02 February 2026 06:24:50 +0000 (0:00:01.139) 0:51:17.602 ******* 2026-02-02 06:24:52.671194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 
'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:24:52.671216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:24:52.671235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:24:52.671280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-02 06:24:52.671323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 
'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:24:52.671342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:24:52.671376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:24:52.671401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91f9e36e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part16', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part14', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part15', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part1', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-02 06:24:52.671423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:24:52.671433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:24:52.671443 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:24:52.671455 | orchestrator | 2026-02-02 06:24:52.671472 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-02 06:24:52.671493 | orchestrator | Monday 02 February 2026 06:24:51 +0000 (0:00:01.364) 0:51:18.967 ******* 2026-02-02 06:24:52.671527 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:24:56.882930 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:24:56.883035 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:24:56.883070 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:24:56.883086 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:24:56.883136 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:24:56.883158 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:24:56.883216 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91f9e36e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part16', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part14', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part15', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part1', 'scsi-SQEMU_QEMU_HARDDISK_91f9e36e-a0b2-48b8-b319-344a0ffd6bbe-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:24:56.883252 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:24:56.883273 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:24:56.883294 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:24:56.883378 | orchestrator | 2026-02-02 06:24:56.883400 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-02 06:24:56.883419 | 
orchestrator | Monday 02 February 2026 06:24:52 +0000 (0:00:01.283) 0:51:20.250 ******* 2026-02-02 06:24:56.883439 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:24:56.883458 | orchestrator | 2026-02-02 06:24:56.883473 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-02 06:24:56.883486 | orchestrator | Monday 02 February 2026 06:24:54 +0000 (0:00:01.504) 0:51:21.754 ******* 2026-02-02 06:24:56.883499 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:24:56.883511 | orchestrator | 2026-02-02 06:24:56.883523 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-02 06:24:56.883535 | orchestrator | Monday 02 February 2026 06:24:55 +0000 (0:00:01.154) 0:51:22.909 ******* 2026-02-02 06:24:56.883548 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:24:56.883561 | orchestrator | 2026-02-02 06:24:56.883573 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-02 06:24:56.883598 | orchestrator | Monday 02 February 2026 06:24:56 +0000 (0:00:01.548) 0:51:24.457 ******* 2026-02-02 06:25:50.466840 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:25:50.466956 | orchestrator | 2026-02-02 06:25:50.466975 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-02 06:25:50.466989 | orchestrator | Monday 02 February 2026 06:24:57 +0000 (0:00:01.109) 0:51:25.567 ******* 2026-02-02 06:25:50.467001 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:25:50.467013 | orchestrator | 2026-02-02 06:25:50.467024 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-02 06:25:50.467036 | orchestrator | Monday 02 February 2026 06:24:59 +0000 (0:00:01.261) 0:51:26.828 ******* 2026-02-02 06:25:50.467047 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:25:50.467058 | orchestrator | 2026-02-02 06:25:50.467069 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-02 06:25:50.467080 | orchestrator | Monday 02 February 2026 06:25:00 +0000 (0:00:01.135) 0:51:27.964 ******* 2026-02-02 06:25:50.467091 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-02 06:25:50.467103 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-02 06:25:50.467113 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-02 06:25:50.467124 | orchestrator | 2026-02-02 06:25:50.467135 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-02 06:25:50.467146 | orchestrator | Monday 02 February 2026 06:25:02 +0000 (0:00:01.717) 0:51:29.682 ******* 2026-02-02 06:25:50.467179 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-02 06:25:50.467191 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-02 06:25:50.467202 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-02 06:25:50.467213 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:25:50.467224 | orchestrator | 2026-02-02 06:25:50.467235 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-02 06:25:50.467246 | orchestrator | Monday 02 February 2026 06:25:03 +0000 (0:00:01.236) 0:51:30.919 ******* 2026-02-02 06:25:50.467256 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:25:50.467267 | orchestrator | 2026-02-02 06:25:50.467293 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-02 06:25:50.467304 | orchestrator | Monday 02 February 2026 06:25:04 +0000 (0:00:01.186) 0:51:32.106 ******* 2026-02-02 06:25:50.467315 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-02 06:25:50.467349 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 
06:25:50.467362 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 06:25:50.467373 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-02 06:25:50.467385 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-02 06:25:50.467397 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-02 06:25:50.467410 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-02 06:25:50.467422 | orchestrator | 2026-02-02 06:25:50.467435 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-02 06:25:50.467448 | orchestrator | Monday 02 February 2026 06:25:06 +0000 (0:00:02.267) 0:51:34.373 ******* 2026-02-02 06:25:50.467461 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-02 06:25:50.467473 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 06:25:50.467484 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 06:25:50.467495 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-02 06:25:50.467505 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-02 06:25:50.467516 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-02 06:25:50.467527 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-02 06:25:50.467538 | orchestrator | 2026-02-02 06:25:50.467549 | orchestrator | TASK [Set max_mds 1 on ceph fs] ************************************************ 2026-02-02 06:25:50.467560 | orchestrator | Monday 02 February 2026 06:25:09 +0000 (0:00:02.677) 0:51:37.050 
******* 2026-02-02 06:25:50.467572 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:25:50.467583 | orchestrator | 2026-02-02 06:25:50.467594 | orchestrator | TASK [Wait until only rank 0 is up] ******************************************** 2026-02-02 06:25:50.467627 | orchestrator | Monday 02 February 2026 06:25:12 +0000 (0:00:03.161) 0:51:40.212 ******* 2026-02-02 06:25:50.467639 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:25:50.467650 | orchestrator | 2026-02-02 06:25:50.467660 | orchestrator | TASK [Get name of remaining active mds] **************************************** 2026-02-02 06:25:50.467671 | orchestrator | Monday 02 February 2026 06:25:15 +0000 (0:00:02.946) 0:51:43.159 ******* 2026-02-02 06:25:50.467682 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:25:50.467693 | orchestrator | 2026-02-02 06:25:50.467704 | orchestrator | TASK [Set_fact mds_active_name] ************************************************ 2026-02-02 06:25:50.467714 | orchestrator | Monday 02 February 2026 06:25:18 +0000 (0:00:02.478) 0:51:45.637 ******* 2026-02-02 06:25:50.467748 | orchestrator | ok: [testbed-node-0] => (item={'key': 'gid_4763', 'value': {'gid': 4763, 'name': 'testbed-node-5', 'rank': 0, 'incarnation': 4, 'state': 'up:active', 'state_seq': 2, 'addr': '192.168.16.15:6817/2115684882', 'addrs': {'addrvec': [{'type': 'v2', 'addr': '192.168.16.15:6816', 'nonce': 2115684882}, {'type': 'v1', 'addr': '192.168.16.15:6817', 'nonce': 2115684882}]}, 'join_fscid': -1, 'export_targets': [], 'features': 4540138322906710015, 'flags': 0, 'compat': {'compat': {}, 'ro_compat': {}, 'incompat': {'feature_1': 'base v0.20', 'feature_2': 'client writeable ranges', 'feature_3': 'default file layouts on dirs', 'feature_4': 'dir inode in separate object', 'feature_5': 'mds uses versioned encoding', 'feature_6': 'dirfrag is stored in omap', 'feature_7': 'mds uses inline data', 'feature_8': 'no anchor table', 'feature_9': 'file layout v2', 'feature_10': 'snaprealm v2'}}}}) 2026-02-02 
06:25:50.467771 | orchestrator | 2026-02-02 06:25:50.467782 | orchestrator | TASK [Set_fact mds_active_host] ************************************************ 2026-02-02 06:25:50.467793 | orchestrator | Monday 02 February 2026 06:25:19 +0000 (0:00:01.188) 0:51:46.825 ******* 2026-02-02 06:25:50.467803 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-02 06:25:50.467814 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-02 06:25:50.467825 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-5) 2026-02-02 06:25:50.467835 | orchestrator | 2026-02-02 06:25:50.467846 | orchestrator | TASK [Create standby_mdss group] *********************************************** 2026-02-02 06:25:50.467857 | orchestrator | Monday 02 February 2026 06:25:21 +0000 (0:00:01.979) 0:51:48.804 ******* 2026-02-02 06:25:50.467867 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-3) 2026-02-02 06:25:50.467878 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-4) 2026-02-02 06:25:50.467888 | orchestrator | 2026-02-02 06:25:50.467899 | orchestrator | TASK [Stop standby ceph mds] *************************************************** 2026-02-02 06:25:50.467910 | orchestrator | Monday 02 February 2026 06:25:22 +0000 (0:00:01.530) 0:51:50.334 ******* 2026-02-02 06:25:50.467920 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-02 06:25:50.467937 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-02 06:25:50.467948 | orchestrator | 2026-02-02 06:25:50.467959 | orchestrator | TASK [Mask systemd units for standby ceph mds] ********************************* 2026-02-02 06:25:50.467971 | orchestrator | Monday 02 February 2026 06:25:32 +0000 (0:00:09.355) 0:51:59.690 ******* 2026-02-02 06:25:50.467981 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 
2026-02-02 06:25:50.467992 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-02 06:25:50.468003 | orchestrator | 2026-02-02 06:25:50.468013 | orchestrator | TASK [Wait until all standbys mds are stopped] ********************************* 2026-02-02 06:25:50.468024 | orchestrator | Monday 02 February 2026 06:25:35 +0000 (0:00:03.684) 0:52:03.374 ******* 2026-02-02 06:25:50.468035 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:25:50.468046 | orchestrator | 2026-02-02 06:25:50.468056 | orchestrator | TASK [Create active_mdss group] ************************************************ 2026-02-02 06:25:50.468067 | orchestrator | Monday 02 February 2026 06:25:37 +0000 (0:00:02.159) 0:52:05.534 ******* 2026-02-02 06:25:50.468078 | orchestrator | changed: [testbed-node-0] 2026-02-02 06:25:50.468088 | orchestrator | 2026-02-02 06:25:50.468099 | orchestrator | PLAY [Upgrade active mds] ****************************************************** 2026-02-02 06:25:50.468110 | orchestrator | 2026-02-02 06:25:50.468121 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-02 06:25:50.468131 | orchestrator | Monday 02 February 2026 06:25:39 +0000 (0:00:01.562) 0:52:07.096 ******* 2026-02-02 06:25:50.468142 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-02-02 06:25:50.468152 | orchestrator | 2026-02-02 06:25:50.468163 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-02 06:25:50.468174 | orchestrator | Monday 02 February 2026 06:25:40 +0000 (0:00:01.150) 0:52:08.247 ******* 2026-02-02 06:25:50.468191 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:25:50.468202 | orchestrator | 2026-02-02 06:25:50.468221 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-02 06:25:50.468238 | orchestrator | Monday 02 February 2026 
06:25:42 +0000 (0:00:01.435) 0:52:09.683 ******* 2026-02-02 06:25:50.468256 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:25:50.468285 | orchestrator | 2026-02-02 06:25:50.468305 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-02 06:25:50.468324 | orchestrator | Monday 02 February 2026 06:25:43 +0000 (0:00:01.097) 0:52:10.781 ******* 2026-02-02 06:25:50.468368 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:25:50.468384 | orchestrator | 2026-02-02 06:25:50.468403 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-02 06:25:50.468421 | orchestrator | Monday 02 February 2026 06:25:44 +0000 (0:00:01.447) 0:52:12.228 ******* 2026-02-02 06:25:50.468439 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:25:50.468457 | orchestrator | 2026-02-02 06:25:50.468475 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-02 06:25:50.468493 | orchestrator | Monday 02 February 2026 06:25:45 +0000 (0:00:01.252) 0:52:13.481 ******* 2026-02-02 06:25:50.468509 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:25:50.468520 | orchestrator | 2026-02-02 06:25:50.468531 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-02 06:25:50.468542 | orchestrator | Monday 02 February 2026 06:25:47 +0000 (0:00:01.185) 0:52:14.667 ******* 2026-02-02 06:25:50.468552 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:25:50.468563 | orchestrator | 2026-02-02 06:25:50.468574 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-02 06:25:50.468584 | orchestrator | Monday 02 February 2026 06:25:48 +0000 (0:00:01.150) 0:52:15.817 ******* 2026-02-02 06:25:50.468595 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:25:50.468606 | orchestrator | 2026-02-02 06:25:50.468617 | orchestrator | TASK [ceph-facts : Set_fact 
ceph_release ceph_stable_release] ****************** 2026-02-02 06:25:50.468627 | orchestrator | Monday 02 February 2026 06:25:49 +0000 (0:00:01.128) 0:52:16.946 ******* 2026-02-02 06:25:50.468638 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:25:50.468649 | orchestrator | 2026-02-02 06:25:50.468670 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-02 06:26:15.125576 | orchestrator | Monday 02 February 2026 06:25:50 +0000 (0:00:01.095) 0:52:18.042 ******* 2026-02-02 06:26:15.125690 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 06:26:15.125710 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 06:26:15.125724 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 06:26:15.125736 | orchestrator | 2026-02-02 06:26:15.125750 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-02 06:26:15.125762 | orchestrator | Monday 02 February 2026 06:25:52 +0000 (0:00:01.677) 0:52:19.719 ******* 2026-02-02 06:26:15.125775 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:26:15.125788 | orchestrator | 2026-02-02 06:26:15.125799 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-02 06:26:15.125810 | orchestrator | Monday 02 February 2026 06:25:53 +0000 (0:00:01.238) 0:52:20.958 ******* 2026-02-02 06:26:15.125817 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 06:26:15.125824 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 06:26:15.125830 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 06:26:15.125836 | orchestrator | 2026-02-02 06:26:15.125842 | orchestrator | TASK 
[ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-02 06:26:15.125848 | orchestrator | Monday 02 February 2026 06:25:56 +0000 (0:00:02.870) 0:52:23.829 ******* 2026-02-02 06:26:15.125872 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-02 06:26:15.125880 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-02 06:26:15.125886 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-02 06:26:15.125901 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:26:15.125908 | orchestrator | 2026-02-02 06:26:15.125914 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-02 06:26:15.125920 | orchestrator | Monday 02 February 2026 06:25:57 +0000 (0:00:01.439) 0:52:25.268 ******* 2026-02-02 06:26:15.125929 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-02 06:26:15.125942 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-02 06:26:15.125952 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-02 06:26:15.125963 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:26:15.125974 | orchestrator | 2026-02-02 06:26:15.125985 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-02 06:26:15.125996 | orchestrator | Monday 02 February 2026 06:25:59 +0000 
(0:00:01.944) 0:52:27.212 ******* 2026-02-02 06:26:15.126010 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 06:26:15.126056 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 06:26:15.126067 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 06:26:15.126077 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:26:15.126087 | orchestrator | 2026-02-02 06:26:15.126097 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-02 06:26:15.126126 | orchestrator | Monday 02 February 2026 06:26:00 +0000 (0:00:01.220) 0:52:28.433 ******* 2026-02-02 06:26:15.126162 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '01c921aa07f8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-02 
06:25:53.876859', 'end': '2026-02-02 06:25:53.937446', 'delta': '0:00:00.060587', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['01c921aa07f8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-02 06:26:15.126187 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'c530967d0aad', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-02 06:25:54.484663', 'end': '2026-02-02 06:25:54.533696', 'delta': '0:00:00.049033', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c530967d0aad'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-02 06:26:15.126201 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'a68c96a70534', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-02 06:25:55.044960', 'end': '2026-02-02 06:25:55.097370', 'delta': '0:00:00.052410', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': 
['a68c96a70534'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-02 06:26:15.126209 | orchestrator | 2026-02-02 06:26:15.126217 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-02 06:26:15.126224 | orchestrator | Monday 02 February 2026 06:26:02 +0000 (0:00:01.200) 0:52:29.634 ******* 2026-02-02 06:26:15.126231 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:26:15.126239 | orchestrator | 2026-02-02 06:26:15.126245 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-02 06:26:15.126253 | orchestrator | Monday 02 February 2026 06:26:03 +0000 (0:00:01.237) 0:52:30.871 ******* 2026-02-02 06:26:15.126260 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:26:15.126267 | orchestrator | 2026-02-02 06:26:15.126274 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-02 06:26:15.126282 | orchestrator | Monday 02 February 2026 06:26:04 +0000 (0:00:01.658) 0:52:32.530 ******* 2026-02-02 06:26:15.126289 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:26:15.126296 | orchestrator | 2026-02-02 06:26:15.126303 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-02 06:26:15.126310 | orchestrator | Monday 02 February 2026 06:26:06 +0000 (0:00:01.297) 0:52:33.827 ******* 2026-02-02 06:26:15.126317 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-02 06:26:15.126323 | orchestrator | 2026-02-02 06:26:15.126329 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 06:26:15.126335 | orchestrator | Monday 02 February 2026 06:26:08 +0000 (0:00:01.941) 0:52:35.769 ******* 2026-02-02 06:26:15.126379 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:26:15.126385 | orchestrator | 2026-02-02 
06:26:15.126392 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-02 06:26:15.126398 | orchestrator | Monday 02 February 2026 06:26:09 +0000 (0:00:01.158) 0:52:36.927 ******* 2026-02-02 06:26:15.126404 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:26:15.126410 | orchestrator | 2026-02-02 06:26:15.126416 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-02 06:26:15.126423 | orchestrator | Monday 02 February 2026 06:26:10 +0000 (0:00:01.091) 0:52:38.019 ******* 2026-02-02 06:26:15.126429 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:26:15.126435 | orchestrator | 2026-02-02 06:26:15.126441 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 06:26:15.126447 | orchestrator | Monday 02 February 2026 06:26:11 +0000 (0:00:01.255) 0:52:39.275 ******* 2026-02-02 06:26:15.126458 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:26:15.126464 | orchestrator | 2026-02-02 06:26:15.126471 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-02 06:26:15.126477 | orchestrator | Monday 02 February 2026 06:26:12 +0000 (0:00:01.131) 0:52:40.407 ******* 2026-02-02 06:26:15.126483 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:26:15.126489 | orchestrator | 2026-02-02 06:26:15.126495 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-02 06:26:15.126501 | orchestrator | Monday 02 February 2026 06:26:13 +0000 (0:00:01.107) 0:52:41.515 ******* 2026-02-02 06:26:15.126513 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:26:20.027894 | orchestrator | 2026-02-02 06:26:20.028001 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-02 06:26:20.028017 | orchestrator | Monday 02 February 2026 06:26:15 +0000 (0:00:01.182) 
0:52:42.698 ******* 2026-02-02 06:26:20.028030 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:26:20.028043 | orchestrator | 2026-02-02 06:26:20.028054 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-02 06:26:20.028065 | orchestrator | Monday 02 February 2026 06:26:16 +0000 (0:00:01.158) 0:52:43.856 ******* 2026-02-02 06:26:20.028084 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:26:20.028103 | orchestrator | 2026-02-02 06:26:20.028121 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-02 06:26:20.028139 | orchestrator | Monday 02 February 2026 06:26:17 +0000 (0:00:01.166) 0:52:45.023 ******* 2026-02-02 06:26:20.028157 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:26:20.028208 | orchestrator | 2026-02-02 06:26:20.028221 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-02 06:26:20.028233 | orchestrator | Monday 02 February 2026 06:26:18 +0000 (0:00:01.166) 0:52:46.189 ******* 2026-02-02 06:26:20.028253 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:26:20.028271 | orchestrator | 2026-02-02 06:26:20.028290 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-02 06:26:20.028334 | orchestrator | Monday 02 February 2026 06:26:19 +0000 (0:00:01.146) 0:52:47.336 ******* 2026-02-02 06:26:20.028397 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:26:20.028418 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 
'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e4fc6918--1796--5a48--9994--5f31e91196e6-osd--block--e4fc6918--1796--5a48--9994--5f31e91196e6', 'dm-uuid-LVM-o4NjfQidgd0d8Dt2ERSF2CVjMcc1iNdF2FL70XUBfeOz8qjNKOcDK13w6fcJ9Hta'], 'uuids': ['7d002011-c2d2-4478-8516-4cfbbdeaec0b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '10248bd5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2FL70X-UBfe-Oz8q-jNKO-cDK1-3w6f-cJ9Hta']}})  2026-02-02 06:26:20.028436 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e969e129-18ea-460f-85bc-8dfb49c82359', 'scsi-SQEMU_QEMU_HARDDISK_e969e129-18ea-460f-85bc-8dfb49c82359'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e969e129', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-02 06:26:20.028482 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qjdzC2-uhmD-TpwQ-o3eu-AERk-xIpn-IuLEqz', 'scsi-0QEMU_QEMU_HARDDISK_bc39994b-92aa-40f2-807e-6457f6f8ea40', 'scsi-SQEMU_QEMU_HARDDISK_bc39994b-92aa-40f2-807e-6457f6f8ea40'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bc39994b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--d54a22ee--8606--5662--853b--b39e232caa8f-osd--block--d54a22ee--8606--5662--853b--b39e232caa8f']}})  2026-02-02 06:26:20.028504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:26:20.028549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:26:20.028574 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-24-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-02 06:26:20.028592 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:26:20.028614 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-WJAtRW-tNBy-gZ9y-UEjc-aaQo-SOl1-TBvsQs', 'dm-uuid-CRYPT-LUKS2-756889fb99344894803ed86e669bebbd-WJAtRW-tNBy-gZ9y-UEjc-aaQo-SOl1-TBvsQs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-02 06:26:20.028627 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:26:20.028641 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d54a22ee--8606--5662--853b--b39e232caa8f-osd--block--d54a22ee--8606--5662--853b--b39e232caa8f', 'dm-uuid-LVM-oyVS0lpzZeiZxxmfRvad67kbexmRBG5IWJAtRWtNBygZ9yUEjcaaQoSOl1TBvsQs'], 'uuids': ['756889fb-9934-4894-803e-d86e669bebbd'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'bc39994b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['WJAtRW-tNBy-gZ9y-UEjc-aaQo-SOl1-TBvsQs']}})  2026-02-02 06:26:20.028672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-etyEN7-O4pu-QliJ-NKxv-0HLx-jIcx-JGZ0d7', 'scsi-0QEMU_QEMU_HARDDISK_10248bd5-0286-487e-81b0-791c797cb21b', 'scsi-SQEMU_QEMU_HARDDISK_10248bd5-0286-487e-81b0-791c797cb21b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '10248bd5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--e4fc6918--1796--5a48--9994--5f31e91196e6-osd--block--e4fc6918--1796--5a48--9994--5f31e91196e6']}})  2026-02-02 06:26:20.028695 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:26:21.873047 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2a7e3dd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part16', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part14', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part15', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part1', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-02 06:26:21.873154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:26:21.873197 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:26:21.873212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2FL70X-UBfe-Oz8q-jNKO-cDK1-3w6f-cJ9Hta', 'dm-uuid-CRYPT-LUKS2-7d002011c2d2447885164cfbbdeaec0b-2FL70X-UBfe-Oz8q-jNKO-cDK1-3w6f-cJ9Hta'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-02 06:26:21.873227 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:26:21.873240 | orchestrator | 2026-02-02 06:26:21.873252 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-02 06:26:21.873264 | orchestrator | Monday 02 February 2026 06:26:21 +0000 (0:00:01.482) 0:52:48.819 ******* 2026-02-02 06:26:21.873296 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:26:21.873317 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e4fc6918--1796--5a48--9994--5f31e91196e6-osd--block--e4fc6918--1796--5a48--9994--5f31e91196e6', 'dm-uuid-LVM-o4NjfQidgd0d8Dt2ERSF2CVjMcc1iNdF2FL70XUBfeOz8qjNKOcDK13w6fcJ9Hta'], 'uuids': ['7d002011-c2d2-4478-8516-4cfbbdeaec0b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '10248bd5', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2FL70X-UBfe-Oz8q-jNKO-cDK1-3w6f-cJ9Hta']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:26:21.873330 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e969e129-18ea-460f-85bc-8dfb49c82359', 'scsi-SQEMU_QEMU_HARDDISK_e969e129-18ea-460f-85bc-8dfb49c82359'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e969e129', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:26:21.873419 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qjdzC2-uhmD-TpwQ-o3eu-AERk-xIpn-IuLEqz', 'scsi-0QEMU_QEMU_HARDDISK_bc39994b-92aa-40f2-807e-6457f6f8ea40', 'scsi-SQEMU_QEMU_HARDDISK_bc39994b-92aa-40f2-807e-6457f6f8ea40'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bc39994b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--d54a22ee--8606--5662--853b--b39e232caa8f-osd--block--d54a22ee--8606--5662--853b--b39e232caa8f']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:26:21.873435 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:26:21.873456 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:26:23.107787 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-24-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:26:23.107914 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:26:23.107932 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-WJAtRW-tNBy-gZ9y-UEjc-aaQo-SOl1-TBvsQs', 'dm-uuid-CRYPT-LUKS2-756889fb99344894803ed86e669bebbd-WJAtRW-tNBy-gZ9y-UEjc-aaQo-SOl1-TBvsQs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:26:23.107963 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:26:23.107974 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d54a22ee--8606--5662--853b--b39e232caa8f-osd--block--d54a22ee--8606--5662--853b--b39e232caa8f', 'dm-uuid-LVM-oyVS0lpzZeiZxxmfRvad67kbexmRBG5IWJAtRWtNBygZ9yUEjcaaQoSOl1TBvsQs'], 'uuids': ['756889fb-9934-4894-803e-d86e669bebbd'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'bc39994b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['WJAtRW-tNBy-gZ9y-UEjc-aaQo-SOl1-TBvsQs']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:26:23.108004 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-etyEN7-O4pu-QliJ-NKxv-0HLx-jIcx-JGZ0d7', 'scsi-0QEMU_QEMU_HARDDISK_10248bd5-0286-487e-81b0-791c797cb21b', 'scsi-SQEMU_QEMU_HARDDISK_10248bd5-0286-487e-81b0-791c797cb21b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '10248bd5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--e4fc6918--1796--5a48--9994--5f31e91196e6-osd--block--e4fc6918--1796--5a48--9994--5f31e91196e6']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:26:23.108021 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:26:23.108032 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2a7e3dd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part16', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part14', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part15', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part1', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:26:23.108048 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:26:23.108063 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:26:57.001925 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2FL70X-UBfe-Oz8q-jNKO-cDK1-3w6f-cJ9Hta', 'dm-uuid-CRYPT-LUKS2-7d002011c2d2447885164cfbbdeaec0b-2FL70X-UBfe-Oz8q-jNKO-cDK1-3w6f-cJ9Hta'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:26:57.002162 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:26:57.002228 | orchestrator | 2026-02-02 06:26:57.002253 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-02 06:26:57.002274 | orchestrator | Monday 02 February 2026 06:26:23 +0000 (0:00:01.861) 0:52:50.681 ******* 2026-02-02 06:26:57.002294 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:26:57.002315 | orchestrator | 2026-02-02 06:26:57.002334 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-02 06:26:57.002355 | orchestrator | Monday 02 February 2026 06:26:24 +0000 (0:00:01.483) 0:52:52.164 ******* 2026-02-02 06:26:57.002429 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:26:57.002451 | orchestrator | 2026-02-02 06:26:57.002474 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-02 06:26:57.002496 | orchestrator | Monday 02 February 2026 06:26:25 +0000 (0:00:01.105) 0:52:53.270 ******* 2026-02-02 06:26:57.002517 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:26:57.002537 | orchestrator | 2026-02-02 06:26:57.002558 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-02 06:26:57.002580 | orchestrator | Monday 02 February 2026 06:26:27 +0000 (0:00:01.427) 0:52:54.698 ******* 2026-02-02 06:26:57.002601 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:26:57.002623 | orchestrator | 2026-02-02 06:26:57.002644 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-02 06:26:57.002664 | orchestrator | Monday 02 February 2026 06:26:28 +0000 (0:00:01.112) 0:52:55.811 ******* 2026-02-02 06:26:57.002684 | orchestrator | skipping: [testbed-node-5] 2026-02-02 
06:26:57.002705 | orchestrator | 2026-02-02 06:26:57.002728 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-02 06:26:57.002748 | orchestrator | Monday 02 February 2026 06:26:29 +0000 (0:00:01.231) 0:52:57.042 ******* 2026-02-02 06:26:57.002769 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:26:57.002790 | orchestrator | 2026-02-02 06:26:57.002811 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-02 06:26:57.002832 | orchestrator | Monday 02 February 2026 06:26:30 +0000 (0:00:01.173) 0:52:58.215 ******* 2026-02-02 06:26:57.002853 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-02 06:26:57.002873 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-02 06:26:57.002892 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-02 06:26:57.002910 | orchestrator | 2026-02-02 06:26:57.002929 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-02 06:26:57.002948 | orchestrator | Monday 02 February 2026 06:26:32 +0000 (0:00:01.709) 0:52:59.925 ******* 2026-02-02 06:26:57.002970 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-02 06:26:57.002991 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-02 06:26:57.003011 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-02 06:26:57.003030 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:26:57.003050 | orchestrator | 2026-02-02 06:26:57.003070 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-02 06:26:57.003091 | orchestrator | Monday 02 February 2026 06:26:33 +0000 (0:00:01.170) 0:53:01.096 ******* 2026-02-02 06:26:57.003112 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-02-02 06:26:57.003127 | 
orchestrator |
2026-02-02 06:26:57.003143 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-02 06:26:57.003162 | orchestrator | Monday 02 February 2026 06:26:34 +0000 (0:00:01.118) 0:53:02.215 *******
2026-02-02 06:26:57.003180 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:26:57.003198 | orchestrator |
2026-02-02 06:26:57.003217 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-02 06:26:57.003236 | orchestrator | Monday 02 February 2026 06:26:35 +0000 (0:00:01.167) 0:53:03.383 *******
2026-02-02 06:26:57.003254 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:26:57.003271 | orchestrator |
2026-02-02 06:26:57.003305 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-02 06:26:57.003326 | orchestrator | Monday 02 February 2026 06:26:36 +0000 (0:00:01.164) 0:53:04.547 *******
2026-02-02 06:26:57.003344 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:26:57.003384 | orchestrator |
2026-02-02 06:26:57.003405 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-02 06:26:57.003423 | orchestrator | Monday 02 February 2026 06:26:38 +0000 (0:00:01.168) 0:53:05.716 *******
2026-02-02 06:26:57.003441 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:26:57.003459 | orchestrator |
2026-02-02 06:26:57.003477 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-02 06:26:57.003496 | orchestrator | Monday 02 February 2026 06:26:39 +0000 (0:00:01.215) 0:53:06.932 *******
2026-02-02 06:26:57.003516 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-02 06:26:57.003560 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-02 06:26:57.003581 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-02 06:26:57.003592 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:26:57.003603 | orchestrator |
2026-02-02 06:26:57.003614 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-02 06:26:57.003625 | orchestrator | Monday 02 February 2026 06:26:40 +0000 (0:00:01.413) 0:53:08.345 *******
2026-02-02 06:26:57.003636 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-02 06:26:57.003646 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-02 06:26:57.003658 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-02 06:26:57.003668 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:26:57.003679 | orchestrator |
2026-02-02 06:26:57.003690 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-02 06:26:57.003700 | orchestrator | Monday 02 February 2026 06:26:42 +0000 (0:00:01.407) 0:53:09.753 *******
2026-02-02 06:26:57.003711 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-02 06:26:57.003722 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-02 06:26:57.003733 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-02 06:26:57.003743 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:26:57.003754 | orchestrator |
2026-02-02 06:26:57.003765 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-02 06:26:57.003775 | orchestrator | Monday 02 February 2026 06:26:43 +0000 (0:00:01.365) 0:53:11.119 *******
2026-02-02 06:26:57.003786 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:26:57.003797 | orchestrator |
2026-02-02 06:26:57.003807 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-02 06:26:57.003818 | orchestrator | Monday 02 February 2026 06:26:44 +0000 (0:00:01.122) 0:53:12.242 *******
2026-02-02 06:26:57.003828 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-02 06:26:57.003839 | orchestrator |
2026-02-02 06:26:57.003850 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-02 06:26:57.003861 | orchestrator | Monday 02 February 2026 06:26:46 +0000 (0:00:01.355) 0:53:13.597 *******
2026-02-02 06:26:57.003872 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-02 06:26:57.003882 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 06:26:57.003893 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 06:26:57.003903 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-02 06:26:57.003914 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-02 06:26:57.003925 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-02 06:26:57.003935 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-02 06:26:57.003946 | orchestrator |
2026-02-02 06:26:57.003967 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-02 06:26:57.003977 | orchestrator | Monday 02 February 2026 06:26:48 +0000 (0:00:02.156) 0:53:15.754 *******
2026-02-02 06:26:57.003994 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-02 06:26:57.004012 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 06:26:57.004031 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 06:26:57.004050 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-02 06:26:57.004065 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-02 06:26:57.004083 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-02 06:26:57.004100 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-02 06:26:57.004117 | orchestrator |
2026-02-02 06:26:57.004132 | orchestrator | TASK [Prevent restart from the packaging] **************************************
2026-02-02 06:26:57.004149 | orchestrator | Monday 02 February 2026 06:26:50 +0000 (0:00:02.664) 0:53:18.419 *******
2026-02-02 06:26:57.004166 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:26:57.004184 | orchestrator |
2026-02-02 06:26:57.004200 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-02 06:26:57.004216 | orchestrator | Monday 02 February 2026 06:26:51 +0000 (0:00:01.132) 0:53:19.551 *******
2026-02-02 06:26:57.004232 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-02-02 06:26:57.004248 | orchestrator |
2026-02-02 06:26:57.004501 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-02 06:26:57.004531 | orchestrator | Monday 02 February 2026 06:26:53 +0000 (0:00:01.092) 0:53:20.644 *******
2026-02-02 06:26:57.004547 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-02-02 06:26:57.004568 | orchestrator |
2026-02-02 06:26:57.004588 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-02 06:26:57.004604 | orchestrator | Monday 02 February 2026 06:26:54 +0000 (0:00:01.256) 0:53:21.901 *******
2026-02-02 06:26:57.004619 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:26:57.004634 | orchestrator |
2026-02-02 06:26:57.004649 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-02 06:26:57.004662 | orchestrator | Monday 02 February 2026 06:26:55 +0000 (0:00:01.124) 0:53:23.025 *******
2026-02-02 06:26:57.004677 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:26:57.004691 | orchestrator |
2026-02-02 06:26:57.004707 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-02 06:26:57.004741 | orchestrator | Monday 02 February 2026 06:26:56 +0000 (0:00:01.543) 0:53:24.568 *******
2026-02-02 06:27:47.128979 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:27:47.129096 | orchestrator |
2026-02-02 06:27:47.129113 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-02 06:27:47.129126 | orchestrator | Monday 02 February 2026 06:26:58 +0000 (0:00:01.521) 0:53:26.090 *******
2026-02-02 06:27:47.129137 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:27:47.129148 | orchestrator |
2026-02-02 06:27:47.129159 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-02 06:27:47.129170 | orchestrator | Monday 02 February 2026 06:27:00 +0000 (0:00:01.599) 0:53:27.690 *******
2026-02-02 06:27:47.129181 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:27:47.129192 | orchestrator |
2026-02-02 06:27:47.129220 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-02 06:27:47.129232 | orchestrator | Monday 02 February 2026 06:27:01 +0000 (0:00:01.115) 0:53:28.806 *******
2026-02-02 06:27:47.129243 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:27:47.129254 | orchestrator |
2026-02-02 06:27:47.129265 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-02 06:27:47.129297 | orchestrator | Monday 02 February 2026 06:27:02 +0000 (0:00:01.102) 0:53:29.908 *******
2026-02-02 06:27:47.129308 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:27:47.129320 | orchestrator |
2026-02-02 06:27:47.129332 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-02 06:27:47.129343 | orchestrator | Monday 02 February 2026 06:27:03 +0000 (0:00:01.118) 0:53:31.027 *******
2026-02-02 06:27:47.129353 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:27:47.129364 | orchestrator |
2026-02-02 06:27:47.129374 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-02 06:27:47.129385 | orchestrator | Monday 02 February 2026 06:27:04 +0000 (0:00:01.547) 0:53:32.575 *******
2026-02-02 06:27:47.129430 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:27:47.129442 | orchestrator |
2026-02-02 06:27:47.129453 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-02 06:27:47.129464 | orchestrator | Monday 02 February 2026 06:27:06 +0000 (0:00:01.535) 0:53:34.110 *******
2026-02-02 06:27:47.129474 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:27:47.129485 | orchestrator |
2026-02-02 06:27:47.129499 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-02 06:27:47.129512 | orchestrator | Monday 02 February 2026 06:27:07 +0000 (0:00:01.097) 0:53:35.207 *******
2026-02-02 06:27:47.129524 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:27:47.129536 | orchestrator |
2026-02-02 06:27:47.129548 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-02 06:27:47.129560 | orchestrator | Monday 02 February 2026 06:27:08 +0000 (0:00:01.143) 0:53:36.351 *******
2026-02-02 06:27:47.129644 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:27:47.129660 | orchestrator |
2026-02-02 06:27:47.129673 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-02 06:27:47.129684 | orchestrator | Monday 02 February 2026 06:27:09 +0000 (0:00:01.212) 0:53:37.564 *******
2026-02-02 06:27:47.129694 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:27:47.129705 | orchestrator |
2026-02-02 06:27:47.129716 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-02 06:27:47.129727 | orchestrator | Monday 02 February 2026 06:27:11 +0000 (0:00:01.198) 0:53:38.763 *******
2026-02-02 06:27:47.129738 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:27:47.129749 | orchestrator |
2026-02-02 06:27:47.129760 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-02 06:27:47.129770 | orchestrator | Monday 02 February 2026 06:27:12 +0000 (0:00:01.163) 0:53:39.926 *******
2026-02-02 06:27:47.129781 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:27:47.129792 | orchestrator |
2026-02-02 06:27:47.129803 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-02 06:27:47.129814 | orchestrator | Monday 02 February 2026 06:27:13 +0000 (0:00:01.146) 0:53:41.073 *******
2026-02-02 06:27:47.129824 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:27:47.129835 | orchestrator |
2026-02-02 06:27:47.129846 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-02 06:27:47.129856 | orchestrator | Monday 02 February 2026 06:27:14 +0000 (0:00:01.150) 0:53:42.223 *******
2026-02-02 06:27:47.129867 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:27:47.129878 | orchestrator |
2026-02-02 06:27:47.129889 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-02 06:27:47.129899 | orchestrator | Monday 02 February 2026 06:27:15 +0000 (0:00:01.120) 0:53:43.343 *******
2026-02-02 06:27:47.129910 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:27:47.129921 | orchestrator |
2026-02-02 06:27:47.129932 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-02 06:27:47.129942 | orchestrator | Monday 02 February 2026 06:27:16 +0000 (0:00:01.116) 0:53:44.459 *******
2026-02-02 06:27:47.129953 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:27:47.129964 | orchestrator |
2026-02-02 06:27:47.129974 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-02 06:27:47.129994 | orchestrator | Monday 02 February 2026 06:27:18 +0000 (0:00:01.153) 0:53:45.612 *******
2026-02-02 06:27:47.130005 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:27:47.130087 | orchestrator |
2026-02-02 06:27:47.130120 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-02 06:27:47.130141 | orchestrator | Monday 02 February 2026 06:27:19 +0000 (0:00:01.130) 0:53:46.744 *******
2026-02-02 06:27:47.130159 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:27:47.130179 | orchestrator |
2026-02-02 06:27:47.130199 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-02 06:27:47.130219 | orchestrator | Monday 02 February 2026 06:27:20 +0000 (0:00:01.149) 0:53:47.893 *******
2026-02-02 06:27:47.130240 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:27:47.130260 | orchestrator |
2026-02-02 06:27:47.130281 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-02 06:27:47.130301 | orchestrator | Monday 02 February 2026 06:27:21 +0000 (0:00:01.196) 0:53:49.090 *******
2026-02-02 06:27:47.130321 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:27:47.130341 | orchestrator |
2026-02-02 06:27:47.130361 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-02 06:27:47.130439 | orchestrator | Monday 02 February 2026 06:27:22 +0000 (0:00:01.133) 0:53:50.224 *******
2026-02-02 06:27:47.130462 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:27:47.130482 | orchestrator |
2026-02-02 06:27:47.130501 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-02 06:27:47.130520 | orchestrator | Monday 02 February 2026 06:27:23 +0000 (0:00:01.085) 0:53:51.309 *******
2026-02-02 06:27:47.130540 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:27:47.130561 | orchestrator |
2026-02-02 06:27:47.130581 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-02 06:27:47.130612 | orchestrator | Monday 02 February 2026 06:27:24 +0000 (0:00:01.173) 0:53:52.482 *******
2026-02-02 06:27:47.130634 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:27:47.130654 | orchestrator |
2026-02-02 06:27:47.130674 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-02 06:27:47.130695 | orchestrator | Monday 02 February 2026 06:27:26 +0000 (0:00:01.180) 0:53:53.663 *******
2026-02-02 06:27:47.130716 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:27:47.130735 | orchestrator |
2026-02-02 06:27:47.130756 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-02 06:27:47.130776 | orchestrator | Monday 02 February 2026 06:27:27 +0000 (0:00:01.145) 0:53:54.809 *******
2026-02-02 06:27:47.130795 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:27:47.130816 | orchestrator |
2026-02-02 06:27:47.130833 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-02 06:27:47.130851 | orchestrator | Monday 02 February 2026 06:27:28 +0000 (0:00:01.163) 0:53:55.973 *******
2026-02-02 06:27:47.130871 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:27:47.130891 | orchestrator |
2026-02-02 06:27:47.130910 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-02 06:27:47.130930 | orchestrator | Monday 02 February 2026 06:27:29 +0000 (0:00:01.103) 0:53:57.077 *******
2026-02-02 06:27:47.130950 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:27:47.130971 | orchestrator |
2026-02-02 06:27:47.130992 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-02 06:27:47.131012 | orchestrator | Monday 02 February 2026 06:27:30 +0000 (0:00:01.087) 0:53:58.164 *******
2026-02-02 06:27:47.131031 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:27:47.131071 | orchestrator |
2026-02-02 06:27:47.131105 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-02 06:27:47.131118 | orchestrator | Monday 02 February 2026 06:27:31 +0000 (0:00:01.101) 0:53:59.265 *******
2026-02-02 06:27:47.131128 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:27:47.131139 | orchestrator |
2026-02-02 06:27:47.131163 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-02 06:27:47.131195 | orchestrator | Monday 02 February 2026 06:27:33 +0000 (0:00:01.898) 0:54:01.164 *******
2026-02-02 06:27:47.131206 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:27:47.131217 | orchestrator |
2026-02-02 06:27:47.131227 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-02 06:27:47.131238 | orchestrator | Monday 02 February 2026 06:27:35 +0000 (0:00:02.210) 0:54:03.374 *******
2026-02-02 06:27:47.131249 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-02-02 06:27:47.131260 | orchestrator |
2026-02-02 06:27:47.131271 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-02 06:27:47.131282 | orchestrator | Monday 02 February 2026 06:27:36 +0000 (0:00:01.096) 0:54:04.471 *******
2026-02-02 06:27:47.131292 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:27:47.131303 | orchestrator |
2026-02-02 06:27:47.131313 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-02 06:27:47.131324 | orchestrator | Monday 02 February 2026 06:27:38 +0000 (0:00:01.156) 0:54:05.628 *******
2026-02-02 06:27:47.131334 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:27:47.131345 | orchestrator |
2026-02-02 06:27:47.131356 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-02 06:27:47.131366 | orchestrator | Monday 02 February 2026 06:27:39 +0000 (0:00:01.241) 0:54:06.869 *******
2026-02-02 06:27:47.131377 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-02 06:27:47.131387 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-02 06:27:47.131425 | orchestrator |
2026-02-02 06:27:47.131436 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-02 06:27:47.131447 | orchestrator | Monday 02 February 2026 06:27:41 +0000 (0:00:01.772) 0:54:08.642 *******
2026-02-02 06:27:47.131458 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:27:47.131468 | orchestrator |
2026-02-02 06:27:47.131486 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-02 06:27:47.131504 | orchestrator | Monday 02 February 2026 06:27:42 +0000 (0:00:01.420) 0:54:10.063 *******
2026-02-02 06:27:47.131521 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:27:47.131540 | orchestrator |
2026-02-02 06:27:47.131559 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-02 06:27:47.131577 | orchestrator | Monday 02 February 2026 06:27:43 +0000 (0:00:01.202) 0:54:11.266 *******
2026-02-02 06:27:47.131588 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:27:47.131599 | orchestrator |
2026-02-02 06:27:47.131610 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-02 06:27:47.131620 | orchestrator | Monday 02 February 2026 06:27:44 +0000 (0:00:01.124) 0:54:12.390 *******
2026-02-02 06:27:47.131631 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:27:47.131642 | orchestrator |
2026-02-02 06:27:47.131653 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-02 06:27:47.131663 | orchestrator | Monday 02 February 2026 06:27:45 +0000 (0:00:01.174) 0:54:13.565 *******
2026-02-02 06:27:47.131674 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-02-02 06:27:47.131684 | orchestrator |
2026-02-02 06:27:47.131695 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-02 06:27:47.131716 | orchestrator | Monday 02 February 2026 06:27:47 +0000 (0:00:01.131) 0:54:14.696 *******
2026-02-02 06:28:33.322675 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:28:33.322799 | orchestrator |
2026-02-02 06:28:33.322822 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-02 06:28:33.322839 | orchestrator | Monday 02 February 2026 06:27:48 +0000 (0:00:01.748) 0:54:16.445 *******
2026-02-02 06:28:33.322856 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-02 06:28:33.322872 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-02 06:28:33.322928 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-02 06:28:33.322945 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:28:33.322962 | orchestrator |
2026-02-02 06:28:33.322977 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-02 06:28:33.322992 | orchestrator | Monday 02 February 2026 06:27:50 +0000 (0:00:01.171) 0:54:17.616 *******
2026-02-02 06:28:33.323006 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:28:33.323020 | orchestrator |
2026-02-02 06:28:33.323034 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-02 06:28:33.323048 | orchestrator | Monday 02 February 2026 06:27:51 +0000 (0:00:01.150) 0:54:18.767 *******
2026-02-02 06:28:33.323062 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:28:33.323074 | orchestrator |
2026-02-02 06:28:33.323084 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-02 06:28:33.323099 | orchestrator | Monday 02 February 2026 06:27:52 +0000 (0:00:01.197) 0:54:19.964 *******
2026-02-02 06:28:33.323111 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:28:33.323136 | orchestrator |
2026-02-02 06:28:33.323150 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-02 06:28:33.323164 | orchestrator | Monday 02 February 2026 06:27:53 +0000 (0:00:01.166) 0:54:21.131 *******
2026-02-02 06:28:33.323179 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:28:33.323193 | orchestrator |
2026-02-02 06:28:33.323208 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-02 06:28:33.323223 | orchestrator | Monday 02 February 2026 06:27:54 +0000 (0:00:01.169) 0:54:22.300 *******
2026-02-02 06:28:33.323238 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:28:33.323251 | orchestrator |
2026-02-02 06:28:33.323266 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-02 06:28:33.323281 | orchestrator | Monday 02 February 2026 06:27:55 +0000 (0:00:01.131) 0:54:23.431 *******
2026-02-02 06:28:33.323296 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:28:33.323310 | orchestrator |
2026-02-02 06:28:33.323324 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-02 06:28:33.323339 | orchestrator | Monday 02 February 2026 06:27:58 +0000 (0:00:02.425) 0:54:25.857 *******
2026-02-02 06:28:33.323353 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:28:33.323367 | orchestrator |
2026-02-02 06:28:33.323382 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-02 06:28:33.323397 | orchestrator | Monday 02 February 2026 06:27:59 +0000 (0:00:01.123) 0:54:26.981 *******
2026-02-02 06:28:33.323413 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-02-02 06:28:33.323458 | orchestrator |
2026-02-02 06:28:33.323474 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-02 06:28:33.323489 | orchestrator | Monday 02 February 2026 06:28:00 +0000 (0:00:01.129) 0:54:28.110 *******
2026-02-02 06:28:33.323504 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:28:33.323519 | orchestrator |
2026-02-02 06:28:33.323536 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-02 06:28:33.323552 | orchestrator | Monday 02 February 2026 06:28:01 +0000 (0:00:01.147) 0:54:29.258 *******
2026-02-02 06:28:33.323570 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:28:33.323586 | orchestrator |
2026-02-02 06:28:33.323601 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-02 06:28:33.323617 | orchestrator | Monday 02 February 2026 06:28:02 +0000 (0:00:01.178) 0:54:30.436 *******
2026-02-02 06:28:33.323631 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:28:33.323645 | orchestrator |
2026-02-02 06:28:33.323660 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-02 06:28:33.323675 | orchestrator | Monday 02 February 2026 06:28:03 +0000 (0:00:01.117) 0:54:31.553 *******
2026-02-02 06:28:33.323689 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:28:33.323718 | orchestrator |
2026-02-02 06:28:33.323733 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-02 06:28:33.323749 | orchestrator | Monday 02 February 2026 06:28:05 +0000 (0:00:01.122) 0:54:32.676 *******
2026-02-02 06:28:33.323763 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:28:33.323779 | orchestrator |
2026-02-02 06:28:33.323793 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-02 06:28:33.323809 | orchestrator | Monday 02 February 2026 06:28:06 +0000 (0:00:01.122) 0:54:33.799 *******
2026-02-02 06:28:33.323824 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:28:33.323839 | orchestrator |
2026-02-02 06:28:33.323855 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-02 06:28:33.323870 | orchestrator | Monday 02 February 2026 06:28:07 +0000 (0:00:01.129) 0:54:34.928 *******
2026-02-02 06:28:33.323886 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:28:33.323901 | orchestrator |
2026-02-02 06:28:33.323917 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-02 06:28:33.323934 | orchestrator | Monday 02 February 2026 06:28:08 +0000 (0:00:01.118) 0:54:36.046 *******
2026-02-02 06:28:33.323949 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:28:33.323965 | orchestrator |
2026-02-02 06:28:33.323981 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-02 06:28:33.323996 | orchestrator | Monday 02 February 2026 06:28:09 +0000 (0:00:01.196) 0:54:37.243 *******
2026-02-02 06:28:33.324078 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:28:33.324095 | orchestrator |
2026-02-02 06:28:33.324110 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-02 06:28:33.324150 | orchestrator | Monday 02 February 2026 06:28:10 +0000 (0:00:01.158) 0:54:38.402 *******
2026-02-02 06:28:33.324169 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-02-02 06:28:33.324186 | orchestrator |
2026-02-02 06:28:33.324202 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-02 06:28:33.324218 | orchestrator | Monday 02 February 2026 06:28:12 +0000 (0:00:01.184) 0:54:39.586 *******
2026-02-02 06:28:33.324234 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-02-02 06:28:33.324262 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-02-02 06:28:33.324277 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-02-02 06:28:33.324293 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-02-02 06:28:33.324308 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-02-02 06:28:33.324323 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-02-02 06:28:33.324339 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-02-02 06:28:33.324354 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-02-02 06:28:33.324371 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-02 06:28:33.324386 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-02 06:28:33.324404 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-02 06:28:33.324419 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-02 06:28:33.324511 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-02 06:28:33.324522 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-02 06:28:33.324531 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-02-02 06:28:33.324539 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-02-02 06:28:33.324548 | orchestrator |
2026-02-02 06:28:33.324557 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-02 06:28:33.324565 | orchestrator | Monday 02 February 2026 06:28:18 +0000 (0:00:06.530) 0:54:46.117 *******
2026-02-02 06:28:33.324574 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-02-02 06:28:33.324582 | orchestrator |
2026-02-02 06:28:33.324602 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-02 06:28:33.324611 | orchestrator | Monday 02 February 2026 06:28:19 +0000 (0:00:01.127) 0:54:47.245 *******
2026-02-02 06:28:33.324620 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-02 06:28:33.324629 | orchestrator |
2026-02-02 06:28:33.324638 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-02 06:28:33.324845 | orchestrator | Monday 02 February 2026 06:28:21 +0000 (0:00:01.501) 0:54:48.746 *******
2026-02-02 06:28:33.324863 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-02 06:28:33.324877 | orchestrator |
2026-02-02 06:28:33.324892 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-02 06:28:33.324904 | orchestrator | Monday 02 February 2026 06:28:23 +0000 (0:00:01.933) 0:54:50.680 *******
2026-02-02 06:28:33.324918 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:28:33.324933 | orchestrator |
2026-02-02 06:28:33.324948 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-02 06:28:33.324961 | orchestrator | Monday 02 February 2026 06:28:24 +0000 (0:00:01.102) 0:54:51.783 *******
2026-02-02 06:28:33.324975 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:28:33.324989 | orchestrator |
2026-02-02 06:28:33.325001 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-02 06:28:33.325014 | orchestrator | Monday 02 February 2026 06:28:25 +0000 (0:00:01.141) 0:54:52.924 *******
2026-02-02 06:28:33.325023 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:28:33.325031 | orchestrator |
2026-02-02 06:28:33.325039 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-02 06:28:33.325047 | orchestrator | Monday 02 February 2026 06:28:26 +0000 (0:00:01.116) 0:54:54.041 *******
2026-02-02 06:28:33.325055 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:28:33.325062 | orchestrator |
2026-02-02 06:28:33.325070 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-02 06:28:33.325078 | orchestrator | Monday 02 February 2026 06:28:27 +0000 (0:00:01.158) 0:54:55.200 *******
2026-02-02 06:28:33.325085 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:28:33.325093 | orchestrator |
2026-02-02 06:28:33.325101 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-02 06:28:33.325109 | orchestrator | Monday 02 February 2026 06:28:28 +0000 (0:00:01.199) 0:54:56.400 *******
2026-02-02 06:28:33.325117 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:28:33.325125 | orchestrator |
2026-02-02 06:28:33.325132 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-02 06:28:33.325140 | orchestrator | Monday 02 February 2026 06:28:29 +0000 (0:00:01.112) 0:54:57.513 *******
2026-02-02 06:28:33.325148 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:28:33.325156 | orchestrator |
2026-02-02 06:28:33.325163 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-02 06:28:33.325171 | orchestrator | Monday 02 February 2026 06:28:31 +0000 (0:00:01.109) 0:54:58.622 *******
2026-02-02 06:28:33.325179 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:28:33.325187 | orchestrator |
2026-02-02 06:28:33.325195 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-02 06:28:33.325203 | orchestrator | Monday 02 February 2026 06:28:32 +0000 (0:00:01.145) 0:54:59.768 *******
2026-02-02 06:28:33.325210 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:28:33.325218 | orchestrator |
2026-02-02 06:28:33.325239 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-02 06:29:28.083848 | orchestrator | Monday 02 February 2026 06:28:33 +0000 (0:00:01.122) 0:55:00.891 *******
2026-02-02 06:29:28.083965 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:29:28.083981 | orchestrator |
2026-02-02 06:29:28.084017 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-02 06:29:28.084029 | orchestrator | Monday 02 February 2026 06:28:34 +0000 (0:00:01.135) 0:55:02.027 *******
2026-02-02 06:29:28.084040 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:29:28.084050 | orchestrator |
2026-02-02 06:29:28.084075 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-02 06:29:28.084086 | orchestrator | Monday 02 February 2026 06:28:35 +0000 (0:00:01.117) 0:55:03.145 *******
2026-02-02 06:29:28.084097 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-02-02 06:29:28.084108 | orchestrator |
2026-02-02 06:29:28.084120 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-02 06:29:28.084131 | orchestrator | Monday 02 February 2026 06:28:39 +0000 (0:00:04.383) 0:55:07.529 *******
2026-02-02 06:29:28.084142 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-02 06:29:28.084155 | orchestrator |
2026-02-02 06:29:28.084166 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-02 06:29:28.084177 | orchestrator | Monday 02 February 2026 06:28:41 +0000 (0:00:01.241) 0:55:08.771 *******
2026-02-02 06:29:28.084189 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-02 06:29:28.084203 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-02 06:29:28.084215 | orchestrator |
2026-02-02 06:29:28.084226 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-02 06:29:28.084237 | orchestrator | Monday 02 February 2026 06:28:45 +0000 (0:00:04.771) 0:55:13.542 *******
2026-02-02 06:29:28.084248 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:29:28.084258 | orchestrator |
2026-02-02 06:29:28.084269 | orchestrator | TASK [ceph-config : Create ceph
conf directory] ******************************** 2026-02-02 06:29:28.084280 | orchestrator | Monday 02 February 2026 06:28:47 +0000 (0:00:01.174) 0:55:14.717 ******* 2026-02-02 06:29:28.084291 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:29:28.084302 | orchestrator | 2026-02-02 06:29:28.084313 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-02 06:29:28.084323 | orchestrator | Monday 02 February 2026 06:28:48 +0000 (0:00:01.122) 0:55:15.839 ******* 2026-02-02 06:29:28.084334 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:29:28.084345 | orchestrator | 2026-02-02 06:29:28.084356 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-02 06:29:28.084366 | orchestrator | Monday 02 February 2026 06:28:49 +0000 (0:00:01.136) 0:55:16.976 ******* 2026-02-02 06:29:28.084377 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:29:28.084390 | orchestrator | 2026-02-02 06:29:28.084403 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-02 06:29:28.084416 | orchestrator | Monday 02 February 2026 06:28:50 +0000 (0:00:01.258) 0:55:18.234 ******* 2026-02-02 06:29:28.084435 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:29:28.084456 | orchestrator | 2026-02-02 06:29:28.084505 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-02 06:29:28.084527 | orchestrator | Monday 02 February 2026 06:28:51 +0000 (0:00:01.206) 0:55:19.441 ******* 2026-02-02 06:29:28.084546 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:29:28.084568 | orchestrator | 2026-02-02 06:29:28.084589 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-02 06:29:28.084621 | orchestrator | Monday 02 February 2026 06:28:53 +0000 (0:00:01.243) 0:55:20.685 
******* 2026-02-02 06:29:28.084634 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-02 06:29:28.084645 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-02 06:29:28.084656 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-02 06:29:28.084667 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:29:28.084677 | orchestrator | 2026-02-02 06:29:28.084689 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-02 06:29:28.084699 | orchestrator | Monday 02 February 2026 06:28:54 +0000 (0:00:01.370) 0:55:22.055 ******* 2026-02-02 06:29:28.084710 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-02 06:29:28.084720 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-02 06:29:28.084731 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-02 06:29:28.084742 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:29:28.084752 | orchestrator | 2026-02-02 06:29:28.084763 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-02 06:29:28.084774 | orchestrator | Monday 02 February 2026 06:28:55 +0000 (0:00:01.378) 0:55:23.433 ******* 2026-02-02 06:29:28.084785 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-02 06:29:28.084795 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-02 06:29:28.084806 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-02 06:29:28.084832 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:29:28.084843 | orchestrator | 2026-02-02 06:29:28.084854 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-02 06:29:28.084865 | orchestrator | Monday 02 February 2026 06:28:57 +0000 (0:00:01.385) 0:55:24.819 ******* 2026-02-02 06:29:28.084875 | orchestrator | ok: 
[testbed-node-5] 2026-02-02 06:29:28.084886 | orchestrator | 2026-02-02 06:29:28.084897 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-02 06:29:28.084914 | orchestrator | Monday 02 February 2026 06:28:58 +0000 (0:00:01.141) 0:55:25.961 ******* 2026-02-02 06:29:28.084925 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-02 06:29:28.084935 | orchestrator | 2026-02-02 06:29:28.084946 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-02 06:29:28.084957 | orchestrator | Monday 02 February 2026 06:28:59 +0000 (0:00:01.383) 0:55:27.345 ******* 2026-02-02 06:29:28.084967 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:29:28.084978 | orchestrator | 2026-02-02 06:29:28.084989 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-02-02 06:29:28.084999 | orchestrator | Monday 02 February 2026 06:29:01 +0000 (0:00:01.712) 0:55:29.058 ******* 2026-02-02 06:29:28.085010 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:29:28.085020 | orchestrator | 2026-02-02 06:29:28.085031 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-02-02 06:29:28.085042 | orchestrator | Monday 02 February 2026 06:29:02 +0000 (0:00:01.114) 0:55:30.172 ******* 2026-02-02 06:29:28.085053 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-5 2026-02-02 06:29:28.085063 | orchestrator | 2026-02-02 06:29:28.085074 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-02-02 06:29:28.085085 | orchestrator | Monday 02 February 2026 06:29:04 +0000 (0:00:01.594) 0:55:31.767 ******* 2026-02-02 06:29:28.085095 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-02 06:29:28.085106 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 
2026-02-02 06:29:28.085117 | orchestrator | 2026-02-02 06:29:28.085128 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-02-02 06:29:28.085138 | orchestrator | Monday 02 February 2026 06:29:06 +0000 (0:00:01.848) 0:55:33.615 ******* 2026-02-02 06:29:28.085149 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 06:29:28.085167 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-02 06:29:28.085178 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-02 06:29:28.085188 | orchestrator | 2026-02-02 06:29:28.085199 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-02-02 06:29:28.085210 | orchestrator | Monday 02 February 2026 06:29:09 +0000 (0:00:03.110) 0:55:36.726 ******* 2026-02-02 06:29:28.085221 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-02-02 06:29:28.085231 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-02 06:29:28.085242 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:29:28.085253 | orchestrator | 2026-02-02 06:29:28.085264 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-02 06:29:28.085274 | orchestrator | Monday 02 February 2026 06:29:11 +0000 (0:00:01.921) 0:55:38.647 ******* 2026-02-02 06:29:28.085285 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:29:28.085295 | orchestrator | 2026-02-02 06:29:28.085306 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-02 06:29:28.085317 | orchestrator | Monday 02 February 2026 06:29:12 +0000 (0:00:01.506) 0:55:40.154 ******* 2026-02-02 06:29:28.085327 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:29:28.085338 | orchestrator | 2026-02-02 06:29:28.085349 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-02 
06:29:28.085359 | orchestrator | Monday 02 February 2026 06:29:13 +0000 (0:00:01.126) 0:55:41.281 ******* 2026-02-02 06:29:28.085370 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-5 2026-02-02 06:29:28.085382 | orchestrator | 2026-02-02 06:29:28.085392 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-02 06:29:28.085403 | orchestrator | Monday 02 February 2026 06:29:15 +0000 (0:00:01.527) 0:55:42.809 ******* 2026-02-02 06:29:28.085414 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-5 2026-02-02 06:29:28.085424 | orchestrator | 2026-02-02 06:29:28.085435 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-02 06:29:28.085446 | orchestrator | Monday 02 February 2026 06:29:16 +0000 (0:00:01.447) 0:55:44.256 ******* 2026-02-02 06:29:28.085457 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:29:28.085467 | orchestrator | 2026-02-02 06:29:28.085520 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-02 06:29:28.085542 | orchestrator | Monday 02 February 2026 06:29:18 +0000 (0:00:02.033) 0:55:46.290 ******* 2026-02-02 06:29:28.085561 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:29:28.085579 | orchestrator | 2026-02-02 06:29:28.085599 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-02 06:29:28.085618 | orchestrator | Monday 02 February 2026 06:29:20 +0000 (0:00:01.946) 0:55:48.237 ******* 2026-02-02 06:29:28.085637 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:29:28.085648 | orchestrator | 2026-02-02 06:29:28.085659 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-02 06:29:28.085670 | orchestrator | Monday 02 February 2026 06:29:22 +0000 (0:00:02.280) 0:55:50.517 ******* 2026-02-02 06:29:28.085680 | 
orchestrator | ok: [testbed-node-5] 2026-02-02 06:29:28.085691 | orchestrator | 2026-02-02 06:29:28.085702 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-02 06:29:28.085713 | orchestrator | Monday 02 February 2026 06:29:25 +0000 (0:00:02.369) 0:55:52.886 ******* 2026-02-02 06:29:28.085723 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:29:28.085734 | orchestrator | 2026-02-02 06:29:28.085745 | orchestrator | TASK [Restart ceph mds] ******************************************************** 2026-02-02 06:29:28.085755 | orchestrator | Monday 02 February 2026 06:29:26 +0000 (0:00:01.680) 0:55:54.567 ******* 2026-02-02 06:29:28.085775 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:29:59.656818 | orchestrator | 2026-02-02 06:29:59.656902 | orchestrator | TASK [Restart active mds] ****************************************************** 2026-02-02 06:29:59.656912 | orchestrator | Monday 02 February 2026 06:29:28 +0000 (0:00:01.088) 0:55:55.655 ******* 2026-02-02 06:29:59.656937 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:29:59.656944 | orchestrator | 2026-02-02 06:29:59.656950 | orchestrator | PLAY [Upgrade standbys ceph mdss cluster] ************************************** 2026-02-02 06:29:59.656956 | orchestrator | 2026-02-02 06:29:59.656973 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-02 06:29:59.656979 | orchestrator | Monday 02 February 2026 06:29:35 +0000 (0:00:07.801) 0:56:03.457 ******* 2026-02-02 06:29:59.656985 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4 2026-02-02 06:29:59.656992 | orchestrator | 2026-02-02 06:29:59.656997 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-02 06:29:59.657003 | orchestrator | Monday 02 February 2026 06:29:37 +0000 (0:00:01.220) 0:56:04.677 ******* 2026-02-02 06:29:59.657009 | 
orchestrator | ok: [testbed-node-3] 2026-02-02 06:29:59.657015 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:29:59.657020 | orchestrator | 2026-02-02 06:29:59.657026 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-02 06:29:59.657032 | orchestrator | Monday 02 February 2026 06:29:38 +0000 (0:00:01.502) 0:56:06.180 ******* 2026-02-02 06:29:59.657037 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:29:59.657043 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:29:59.657049 | orchestrator | 2026-02-02 06:29:59.657054 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-02 06:29:59.657060 | orchestrator | Monday 02 February 2026 06:29:40 +0000 (0:00:01.553) 0:56:07.734 ******* 2026-02-02 06:29:59.657066 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:29:59.657071 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:29:59.657077 | orchestrator | 2026-02-02 06:29:59.657083 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-02 06:29:59.657089 | orchestrator | Monday 02 February 2026 06:29:41 +0000 (0:00:01.561) 0:56:09.295 ******* 2026-02-02 06:29:59.657094 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:29:59.657100 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:29:59.657106 | orchestrator | 2026-02-02 06:29:59.657111 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-02 06:29:59.657117 | orchestrator | Monday 02 February 2026 06:29:42 +0000 (0:00:01.222) 0:56:10.518 ******* 2026-02-02 06:29:59.657123 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:29:59.657128 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:29:59.657134 | orchestrator | 2026-02-02 06:29:59.657140 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-02 06:29:59.657145 | orchestrator | Monday 02 February 
2026 06:29:44 +0000 (0:00:01.226) 0:56:11.744 ******* 2026-02-02 06:29:59.657152 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:29:59.657158 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:29:59.657164 | orchestrator | 2026-02-02 06:29:59.657170 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-02 06:29:59.657176 | orchestrator | Monday 02 February 2026 06:29:45 +0000 (0:00:01.422) 0:56:13.166 ******* 2026-02-02 06:29:59.657181 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:29:59.657188 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:29:59.657193 | orchestrator | 2026-02-02 06:29:59.657199 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-02 06:29:59.657205 | orchestrator | Monday 02 February 2026 06:29:46 +0000 (0:00:01.298) 0:56:14.465 ******* 2026-02-02 06:29:59.657211 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:29:59.657216 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:29:59.657222 | orchestrator | 2026-02-02 06:29:59.657228 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-02 06:29:59.657233 | orchestrator | Monday 02 February 2026 06:29:48 +0000 (0:00:01.338) 0:56:15.803 ******* 2026-02-02 06:29:59.657239 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 06:29:59.657245 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 06:29:59.657255 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 06:29:59.657261 | orchestrator | 2026-02-02 06:29:59.657267 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-02 06:29:59.657272 | orchestrator | Monday 02 February 2026 06:29:49 +0000 (0:00:01.774) 0:56:17.578 ******* 2026-02-02 06:29:59.657278 
| orchestrator | ok: [testbed-node-3] 2026-02-02 06:29:59.657284 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:29:59.657293 | orchestrator | 2026-02-02 06:29:59.657302 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-02 06:29:59.657312 | orchestrator | Monday 02 February 2026 06:29:51 +0000 (0:00:01.359) 0:56:18.937 ******* 2026-02-02 06:29:59.657321 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 06:29:59.657331 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 06:29:59.657340 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 06:29:59.657349 | orchestrator | 2026-02-02 06:29:59.657358 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-02 06:29:59.657367 | orchestrator | Monday 02 February 2026 06:29:54 +0000 (0:00:02.844) 0:56:21.782 ******* 2026-02-02 06:29:59.657377 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-02 06:29:59.657386 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-02 06:29:59.657396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-02 06:29:59.657406 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:29:59.657415 | orchestrator | 2026-02-02 06:29:59.657425 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-02 06:29:59.657435 | orchestrator | Monday 02 February 2026 06:29:55 +0000 (0:00:01.460) 0:56:23.243 ******* 2026-02-02 06:29:59.657462 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-02 06:29:59.657476 | 
orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-02 06:29:59.657481 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-02 06:29:59.657487 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:29:59.657492 | orchestrator | 2026-02-02 06:29:59.657516 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-02 06:29:59.657522 | orchestrator | Monday 02 February 2026 06:29:57 +0000 (0:00:01.640) 0:56:24.883 ******* 2026-02-02 06:29:59.657529 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 06:29:59.657537 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 06:29:59.657543 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 06:29:59.657554 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:29:59.657560 | orchestrator | 2026-02-02 06:29:59.657565 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-02 06:29:59.657570 | orchestrator | Monday 02 February 2026 06:29:58 +0000 (0:00:01.164) 0:56:26.048 ******* 2026-02-02 06:29:59.657578 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '01c921aa07f8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-02 06:29:51.860141', 'end': '2026-02-02 06:29:51.914495', 'delta': '0:00:00.054354', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['01c921aa07f8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-02 06:29:59.657587 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c530967d0aad', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-02 06:29:52.451095', 'end': '2026-02-02 06:29:52.493255', 'delta': '0:00:00.042160', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': 
None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c530967d0aad'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-02 06:29:59.657602 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a68c96a70534', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-02 06:29:53.008396', 'end': '2026-02-02 06:29:53.055501', 'delta': '0:00:00.047105', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a68c96a70534'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-02 06:30:18.814725 | orchestrator | 2026-02-02 06:30:18.814860 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-02 06:30:18.814878 | orchestrator | Monday 02 February 2026 06:29:59 +0000 (0:00:01.179) 0:56:27.228 ******* 2026-02-02 06:30:18.814890 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:30:18.814903 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:30:18.814914 | orchestrator | 2026-02-02 06:30:18.814925 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-02 06:30:18.814936 | orchestrator | Monday 02 February 2026 06:30:01 +0000 (0:00:01.451) 0:56:28.680 ******* 2026-02-02 06:30:18.814947 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:30:18.814959 | orchestrator | 2026-02-02 06:30:18.814970 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-02 06:30:18.814981 | orchestrator | Monday 02 
February 2026 06:30:02 +0000 (0:00:01.212) 0:56:29.893 ******* 2026-02-02 06:30:18.814992 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:30:18.815003 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:30:18.815035 | orchestrator | 2026-02-02 06:30:18.815047 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-02 06:30:18.815058 | orchestrator | Monday 02 February 2026 06:30:03 +0000 (0:00:01.234) 0:56:31.127 ******* 2026-02-02 06:30:18.815069 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-02 06:30:18.815080 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-02 06:30:18.815091 | orchestrator | 2026-02-02 06:30:18.815102 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 06:30:18.815113 | orchestrator | Monday 02 February 2026 06:30:06 +0000 (0:00:02.636) 0:56:33.764 ******* 2026-02-02 06:30:18.815124 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:30:18.815135 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:30:18.815146 | orchestrator | 2026-02-02 06:30:18.815157 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-02 06:30:18.815168 | orchestrator | Monday 02 February 2026 06:30:07 +0000 (0:00:01.244) 0:56:35.008 ******* 2026-02-02 06:30:18.815178 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:30:18.815190 | orchestrator | 2026-02-02 06:30:18.815201 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-02 06:30:18.815212 | orchestrator | Monday 02 February 2026 06:30:08 +0000 (0:00:01.111) 0:56:36.120 ******* 2026-02-02 06:30:18.815223 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:30:18.815234 | orchestrator | 2026-02-02 06:30:18.815245 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 
06:30:18.815256 | orchestrator | Monday 02 February 2026 06:30:09 +0000 (0:00:01.223) 0:56:37.344 ******* 2026-02-02 06:30:18.815266 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:30:18.815277 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:30:18.815288 | orchestrator | 2026-02-02 06:30:18.815299 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-02 06:30:18.815310 | orchestrator | Monday 02 February 2026 06:30:10 +0000 (0:00:01.208) 0:56:38.552 ******* 2026-02-02 06:30:18.815320 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:30:18.815332 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:30:18.815342 | orchestrator | 2026-02-02 06:30:18.815353 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-02 06:30:18.815364 | orchestrator | Monday 02 February 2026 06:30:12 +0000 (0:00:01.226) 0:56:39.779 ******* 2026-02-02 06:30:18.815375 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:30:18.815386 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:30:18.815397 | orchestrator | 2026-02-02 06:30:18.815407 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-02 06:30:18.815418 | orchestrator | Monday 02 February 2026 06:30:13 +0000 (0:00:01.261) 0:56:41.041 ******* 2026-02-02 06:30:18.815429 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:30:18.815440 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:30:18.815451 | orchestrator | 2026-02-02 06:30:18.815462 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-02 06:30:18.815472 | orchestrator | Monday 02 February 2026 06:30:14 +0000 (0:00:01.313) 0:56:42.354 ******* 2026-02-02 06:30:18.815483 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:30:18.815494 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:30:18.815505 | orchestrator | 2026-02-02 
06:30:18.815584 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-02 06:30:18.815604 | orchestrator | Monday 02 February 2026 06:30:16 +0000 (0:00:01.275) 0:56:43.630 ******* 2026-02-02 06:30:18.815622 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:30:18.815640 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:30:18.815658 | orchestrator | 2026-02-02 06:30:18.815676 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-02 06:30:18.815695 | orchestrator | Monday 02 February 2026 06:30:17 +0000 (0:00:01.241) 0:56:44.871 ******* 2026-02-02 06:30:18.815713 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:30:18.815732 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:30:18.815788 | orchestrator | 2026-02-02 06:30:18.815801 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-02 06:30:18.815811 | orchestrator | Monday 02 February 2026 06:30:18 +0000 (0:00:01.256) 0:56:46.128 ******* 2026-02-02 06:30:18.815825 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:30:18.815876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--af42a967--eb71--546a--abb0--a5185990ed2a-osd--block--af42a967--eb71--546a--abb0--a5185990ed2a', 'dm-uuid-LVM-nQNI9mGSypmWJN7Kribh0RNL5qLQKFSceYxT4mfzBYfoYiha3ZzoEdYR0rTnnIvK'], 'uuids': ['a78e3f4b-723a-42a3-abd4-4d699a55c416'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': 
None, 'sas_device_handle': None, 'serial': '1f26c814', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['eYxT4m-fzBY-foYi-ha3Z-zoEd-YR0r-TnnIvK']}})  2026-02-02 06:30:18.815894 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c15f901f-7629-41e5-bfd5-e721d3f198c6', 'scsi-SQEMU_QEMU_HARDDISK_c15f901f-7629-41e5-bfd5-e721d3f198c6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c15f901f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-02 06:30:18.815907 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-HOxmXw-N5cX-V1Nz-Lu3r-OQk9-N5gG-1syyTi', 'scsi-0QEMU_QEMU_HARDDISK_5578c4aa-4507-4a80-9665-78072b9f11f4', 'scsi-SQEMU_QEMU_HARDDISK_5578c4aa-4507-4a80-9665-78072b9f11f4'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5578c4aa', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--2b8f5a57--fc4d--5c4a--8869--764dca19b379-osd--block--2b8f5a57--fc4d--5c4a--8869--764dca19b379']}})  2026-02-02 06:30:18.815919 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:30:18.815931 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:30:18.815943 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-02 06:30:18.815964 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:30:18.815990 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-gyH8a8-4MHr-Jn7b-cz7p-hSu8-LEA3-bm3DqO', 'dm-uuid-CRYPT-LUKS2-8edeb25f170042ba8e6d0505727d2968-gyH8a8-4MHr-Jn7b-cz7p-hSu8-LEA3-bm3DqO'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-02 06:30:18.900687 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:30:18.900790 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:30:18.900808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2b8f5a57--fc4d--5c4a--8869--764dca19b379-osd--block--2b8f5a57--fc4d--5c4a--8869--764dca19b379', 'dm-uuid-LVM-2Xx1rXy8ZvvzVeymXUM2Y23jmTeKUn30gyH8a84MHrJn7bcz7phSu8LEA3bm3DqO'], 'uuids': 
['8edeb25f-1700-42ba-8e6d-0505727d2968'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5578c4aa', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['gyH8a8-4MHr-Jn7b-cz7p-hSu8-LEA3-bm3DqO']}})  2026-02-02 06:30:18.900823 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--106e1245--4ea8--54a2--9b27--5c2b147fae19-osd--block--106e1245--4ea8--54a2--9b27--5c2b147fae19', 'dm-uuid-LVM-7fojGdQjjxzlZ1d67G3lfXV0uQvvNrpG74l8TP6AWG5LY1LTlUkEVjmQPc2hTMkL'], 'uuids': ['0037b285-4ac2-45c2-8d5f-985073fa4cde'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2d3e981f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['74l8TP-6AWG-5LY1-LTlU-kEVj-mQPc-2hTMkL']}})  2026-02-02 06:30:18.900835 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_076229ff-17a9-47be-973d-14b64a36a012', 'scsi-SQEMU_QEMU_HARDDISK_076229ff-17a9-47be-973d-14b64a36a012'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '076229ff', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-02 06:30:18.900887 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-yf6lEa-f3nO-iewk-DEDy-Fb6j-Kq2P-dbkgMf', 'scsi-0QEMU_QEMU_HARDDISK_1f26c814-af40-4046-ac8d-013998d956cc', 'scsi-SQEMU_QEMU_HARDDISK_1f26c814-af40-4046-ac8d-013998d956cc'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1f26c814', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--af42a967--eb71--546a--abb0--a5185990ed2a-osd--block--af42a967--eb71--546a--abb0--a5185990ed2a']}})  2026-02-02 06:30:18.900922 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-AITawh-CkpC-7L3c-Vqqe-GXUP-7eEh-WwcXRH', 'scsi-0QEMU_QEMU_HARDDISK_9dac4244-a4bc-44f9-ad81-53a595dd15e5', 'scsi-SQEMU_QEMU_HARDDISK_9dac4244-a4bc-44f9-ad81-53a595dd15e5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9dac4244', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89-osd--block--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89']}})  2026-02-02 06:30:18.900934 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:30:18.900946 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:30:18.900960 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2944b273', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part16', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part14', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part15', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part1', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-02 06:30:18.900998 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:30:18.901018 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:30:20.034344 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-26-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-02 06:30:20.034459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}})  2026-02-02 06:30:20.034490 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:30:20.034497 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-eYxT4m-fzBY-foYi-ha3Z-zoEd-YR0r-TnnIvK', 'dm-uuid-CRYPT-LUKS2-a78e3f4b723a42a3abd44d699a55c416-eYxT4m-fzBY-foYi-ha3Z-zoEd-YR0r-TnnIvK'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-02 06:30:20.034554 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-HvMSds-O6Yc-SLb4-aDqG-ATNE-cOud-g8iQom', 'dm-uuid-CRYPT-LUKS2-6399826b15f3492994c0bc4d1d3bf1c1-HvMSds-O6Yc-SLb4-aDqG-ATNE-cOud-g8iQom'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-02 06:30:20.034562 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:30:20.034568 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:30:20.034585 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89-osd--block--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89', 'dm-uuid-LVM-bGXwDmNnGJLl15xDO66UDgeGoDbpg8C0HvMSdsO6YcSLb4aDqGATNEcOudg8iQom'], 'uuids': ['6399826b-15f3-4929-94c0-bc4d1d3bf1c1'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '9dac4244', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['HvMSds-O6Yc-SLb4-aDqG-ATNE-cOud-g8iQom']}})  2026-02-02 06:30:20.034605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-QbZaLy-yUYT-ccut-PcI7-2pGL-9PmJ-6NoPFr', 'scsi-0QEMU_QEMU_HARDDISK_2d3e981f-8554-4288-941a-275f46913f28', 'scsi-SQEMU_QEMU_HARDDISK_2d3e981f-8554-4288-941a-275f46913f28'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2d3e981f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--106e1245--4ea8--54a2--9b27--5c2b147fae19-osd--block--106e1245--4ea8--54a2--9b27--5c2b147fae19']}})  2026-02-02 06:30:20.034612 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:30:20.034619 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6d8209b1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part16', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part14', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part15', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part1', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-02 06:30:20.034629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:30:20.034637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:30:20.034646 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-74l8TP-6AWG-5LY1-LTlU-kEVj-mQPc-2hTMkL', 'dm-uuid-CRYPT-LUKS2-0037b2854ac245c28d5f985073fa4cde-74l8TP-6AWG-5LY1-LTlU-kEVj-mQPc-2hTMkL'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-02 06:30:20.249742 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:30:20.249865 | orchestrator | 2026-02-02 06:30:20.249880 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-02 06:30:20.249892 | orchestrator | Monday 02 February 2026 06:30:20 +0000 (0:00:01.478) 0:56:47.606 ******* 2026-02-02 06:30:20.249904 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:30:20.249918 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--af42a967--eb71--546a--abb0--a5185990ed2a-osd--block--af42a967--eb71--546a--abb0--a5185990ed2a', 'dm-uuid-LVM-nQNI9mGSypmWJN7Kribh0RNL5qLQKFSceYxT4mfzBYfoYiha3ZzoEdYR0rTnnIvK'], 'uuids': ['a78e3f4b-723a-42a3-abd4-4d699a55c416'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1f26c814', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['eYxT4m-fzBY-foYi-ha3Z-zoEd-YR0r-TnnIvK']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:30:20.249954 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c15f901f-7629-41e5-bfd5-e721d3f198c6', 'scsi-SQEMU_QEMU_HARDDISK_c15f901f-7629-41e5-bfd5-e721d3f198c6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c15f901f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:30:20.249981 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-HOxmXw-N5cX-V1Nz-Lu3r-OQk9-N5gG-1syyTi', 'scsi-0QEMU_QEMU_HARDDISK_5578c4aa-4507-4a80-9665-78072b9f11f4', 'scsi-SQEMU_QEMU_HARDDISK_5578c4aa-4507-4a80-9665-78072b9f11f4'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5578c4aa', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--2b8f5a57--fc4d--5c4a--8869--764dca19b379-osd--block--2b8f5a57--fc4d--5c4a--8869--764dca19b379']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:30:20.250079 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:30:20.250104 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:30:20.250121 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:30:20.250153 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:30:20.250170 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--106e1245--4ea8--54a2--9b27--5c2b147fae19-osd--block--106e1245--4ea8--54a2--9b27--5c2b147fae19', 'dm-uuid-LVM-7fojGdQjjxzlZ1d67G3lfXV0uQvvNrpG74l8TP6AWG5LY1LTlUkEVjmQPc2hTMkL'], 'uuids': ['0037b285-4ac2-45c2-8d5f-985073fa4cde'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2d3e981f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['74l8TP-6AWG-5LY1-LTlU-kEVj-mQPc-2hTMkL']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:30:20.250196 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:30:20.250218 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_076229ff-17a9-47be-973d-14b64a36a012', 'scsi-SQEMU_QEMU_HARDDISK_076229ff-17a9-47be-973d-14b64a36a012'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '076229ff', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:30:20.318896 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-gyH8a8-4MHr-Jn7b-cz7p-hSu8-LEA3-bm3DqO', 'dm-uuid-CRYPT-LUKS2-8edeb25f170042ba8e6d0505727d2968-gyH8a8-4MHr-Jn7b-cz7p-hSu8-LEA3-bm3DqO'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:30:20.319060 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-AITawh-CkpC-7L3c-Vqqe-GXUP-7eEh-WwcXRH', 'scsi-0QEMU_QEMU_HARDDISK_9dac4244-a4bc-44f9-ad81-53a595dd15e5', 'scsi-SQEMU_QEMU_HARDDISK_9dac4244-a4bc-44f9-ad81-53a595dd15e5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9dac4244', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89-osd--block--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89']}}, 'ansible_loop_var': 'item'})
2026-02-02 06:30:20.319093 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:30:20.319134 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:30:20.319156 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2b8f5a57--fc4d--5c4a--8869--764dca19b379-osd--block--2b8f5a57--fc4d--5c4a--8869--764dca19b379', 'dm-uuid-LVM-2Xx1rXy8ZvvzVeymXUM2Y23jmTeKUn30gyH8a84MHrJn7bcz7phSu8LEA3bm3DqO'], 'uuids': ['8edeb25f-1700-42ba-8e6d-0505727d2968'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5578c4aa', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['gyH8a8-4MHr-Jn7b-cz7p-hSu8-LEA3-bm3DqO']}}, 'ansible_loop_var': 'item'})
2026-02-02 06:30:20.319203 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:30:20.319224 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-yf6lEa-f3nO-iewk-DEDy-Fb6j-Kq2P-dbkgMf', 'scsi-0QEMU_QEMU_HARDDISK_1f26c814-af40-4046-ac8d-013998d956cc', 'scsi-SQEMU_QEMU_HARDDISK_1f26c814-af40-4046-ac8d-013998d956cc'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1f26c814', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--af42a967--eb71--546a--abb0--a5185990ed2a-osd--block--af42a967--eb71--546a--abb0--a5185990ed2a']}}, 'ansible_loop_var': 'item'})
2026-02-02 06:30:20.319254 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-26-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:30:20.319266 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:30:20.319283 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:30:20.319306 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2944b273', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part16', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part14', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part15', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part1', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:30:20.376094 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-HvMSds-O6Yc-SLb4-aDqG-ATNE-cOud-g8iQom', 'dm-uuid-CRYPT-LUKS2-6399826b15f3492994c0bc4d1d3bf1c1-HvMSds-O6Yc-SLb4-aDqG-ATNE-cOud-g8iQom'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:30:20.376217 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:30:20.376235 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:30:20.376247 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:30:20.376260 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89-osd--block--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89', 'dm-uuid-LVM-bGXwDmNnGJLl15xDO66UDgeGoDbpg8C0HvMSdsO6YcSLb4aDqGATNEcOudg8iQom'], 'uuids': ['6399826b-15f3-4929-94c0-bc4d1d3bf1c1'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '9dac4244', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['HvMSds-O6Yc-SLb4-aDqG-ATNE-cOud-g8iQom']}}, 'ansible_loop_var': 'item'})
2026-02-02 06:30:20.376322 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-eYxT4m-fzBY-foYi-ha3Z-zoEd-YR0r-TnnIvK', 'dm-uuid-CRYPT-LUKS2-a78e3f4b723a42a3abd44d699a55c416-eYxT4m-fzBY-foYi-ha3Z-zoEd-YR0r-TnnIvK'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:30:20.376337 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:30:20.376351 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-QbZaLy-yUYT-ccut-PcI7-2pGL-9PmJ-6NoPFr', 'scsi-0QEMU_QEMU_HARDDISK_2d3e981f-8554-4288-941a-275f46913f28', 'scsi-SQEMU_QEMU_HARDDISK_2d3e981f-8554-4288-941a-275f46913f28'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2d3e981f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--106e1245--4ea8--54a2--9b27--5c2b147fae19-osd--block--106e1245--4ea8--54a2--9b27--5c2b147fae19']}}, 'ansible_loop_var': 'item'})
2026-02-02 06:30:20.376372 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:30:20.376394 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6d8209b1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part16', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part14', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part15', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part1', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:30:49.644962 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:30:49.645085 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:30:49.645120 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-74l8TP-6AWG-5LY1-LTlU-kEVj-mQPc-2hTMkL', 'dm-uuid-CRYPT-LUKS2-0037b2854ac245c28d5f985073fa4cde-74l8TP-6AWG-5LY1-LTlU-kEVj-mQPc-2hTMkL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1',
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:30:49.645136 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:30:49.645150 | orchestrator |
2026-02-02 06:30:49.645163 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-02 06:30:49.645175 | orchestrator | Monday 02 February 2026 06:30:21 +0000 (0:00:01.530) 0:56:49.136 *******
2026-02-02 06:30:49.645186 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:30:49.645198 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:30:49.645208 | orchestrator |
2026-02-02 06:30:49.645219 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-02 06:30:49.645250 | orchestrator | Monday 02 February 2026 06:30:23 +0000 (0:00:01.738) 0:56:50.875 *******
2026-02-02 06:30:49.645262 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:30:49.645273 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:30:49.645284 | orchestrator |
2026-02-02 06:30:49.645295 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-02 06:30:49.645306 | orchestrator | Monday 02 February 2026 06:30:24 +0000 (0:00:01.231) 0:56:52.106 *******
2026-02-02 06:30:49.645316 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:30:49.645327 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:30:49.645337 | orchestrator |
2026-02-02 06:30:49.645348 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-02 06:30:49.645359 | orchestrator | Monday 02 February 2026 06:30:26 +0000 (0:00:01.564) 0:56:53.670 *******
2026-02-02 06:30:49.645370 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:30:49.645380 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:30:49.645391 | orchestrator |
2026-02-02 06:30:49.645402 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-02 06:30:49.645413 | orchestrator | Monday 02 February 2026 06:30:27 +0000 (0:00:01.272) 0:56:54.943 *******
2026-02-02 06:30:49.645423 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:30:49.645440 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:30:49.645458 | orchestrator |
2026-02-02 06:30:49.645476 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-02 06:30:49.645495 | orchestrator | Monday 02 February 2026 06:30:28 +0000 (0:00:01.328) 0:56:56.271 *******
2026-02-02 06:30:49.645512 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:30:49.645587 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:30:49.645608 | orchestrator |
2026-02-02 06:30:49.645627 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-02 06:30:49.645646 | orchestrator | Monday 02 February 2026 06:30:29 +0000 (0:00:01.290) 0:56:57.562 *******
2026-02-02 06:30:49.645666 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-02 06:30:49.645684 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-02 06:30:49.645703 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-02 06:30:49.645714 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-02 06:30:49.645725 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-02 06:30:49.645736 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-02 06:30:49.645746 | orchestrator |
2026-02-02 06:30:49.645757 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-02 06:30:49.645768 | orchestrator | Monday 02 February 2026 06:30:32 +0000 (0:00:02.211) 0:56:59.774 *******
2026-02-02 06:30:49.645801 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-02 06:30:49.645813 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-02 06:30:49.645824 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-02 06:30:49.645834 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:30:49.645845 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-02 06:30:49.645856 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-02 06:30:49.645866 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-02 06:30:49.645877 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:30:49.645888 | orchestrator |
2026-02-02 06:30:49.645899 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-02 06:30:49.645910 | orchestrator | Monday 02 February 2026 06:30:33 +0000 (0:00:01.294) 0:57:01.191 *******
2026-02-02 06:30:49.645922 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4
2026-02-02 06:30:49.645933 | orchestrator |
2026-02-02 06:30:49.645945 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-02 06:30:49.645967 | orchestrator | Monday 02 February 2026 06:30:34 +0000 (0:00:01.294) 0:57:02.486 *******
2026-02-02 06:30:49.645978 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:30:49.645989 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:30:49.646000 | orchestrator |
2026-02-02 06:30:49.646011 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-02 06:30:49.646084 | orchestrator | Monday 02 February 2026 06:30:36 +0000 (0:00:01.245) 0:57:03.731 *******
2026-02-02 06:30:49.646097 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:30:49.646107 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:30:49.646118 | orchestrator |
2026-02-02 06:30:49.646136 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-02 06:30:49.646147 | orchestrator | Monday 02 February 2026 06:30:37 +0000 (0:00:01.304) 0:57:05.036 *******
2026-02-02 06:30:49.646157 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:30:49.646168 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:30:49.646179 | orchestrator |
2026-02-02 06:30:49.646189 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-02 06:30:49.646200 | orchestrator | Monday 02 February 2026 06:30:38 +0000 (0:00:01.255) 0:57:06.291 *******
2026-02-02 06:30:49.646211 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:30:49.646222 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:30:49.646237 | orchestrator |
2026-02-02 06:30:49.646257 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-02 06:30:49.646276 | orchestrator | Monday 02 February 2026 06:30:40 +0000 (0:00:01.345) 0:57:07.637 *******
2026-02-02 06:30:49.646295 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 06:30:49.646316 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 06:30:49.646334 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 06:30:49.646353 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:30:49.646374 | orchestrator |
2026-02-02 06:30:49.646395 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-02 06:30:49.646414 | orchestrator | Monday 02 February 2026 06:30:41 +0000 (0:00:01.850) 0:57:09.487 *******
2026-02-02 06:30:49.646436 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 06:30:49.646457 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 06:30:49.646477 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 06:30:49.646490 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:30:49.646501 | orchestrator |
2026-02-02 06:30:49.646511 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-02 06:30:49.646522 | orchestrator | Monday 02 February 2026 06:30:43 +0000 (0:00:01.400) 0:57:10.887 *******
2026-02-02 06:30:49.646561 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 06:30:49.646573 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 06:30:49.646584 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 06:30:49.646595 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:30:49.646606 | orchestrator |
2026-02-02 06:30:49.646617 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-02 06:30:49.646628 | orchestrator | Monday 02 February 2026 06:30:44 +0000 (0:00:01.421) 0:57:12.309 *******
2026-02-02 06:30:49.646638 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:30:49.646649 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:30:49.646660 | orchestrator |
2026-02-02 06:30:49.646671 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-02 06:30:49.646682 | orchestrator | Monday 02 February 2026 06:30:46 +0000 (0:00:01.294) 0:57:13.604 *******
2026-02-02 06:30:49.646695 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-02 06:30:49.646713 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-02 06:30:49.646730 | orchestrator |
2026-02-02 06:30:49.646749 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-02 06:30:49.646782 | orchestrator | Monday 02 February 2026 06:30:47 +0000 (0:00:01.432) 0:57:15.037 *******
2026-02-02 06:30:49.646800 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-02 06:30:49.646817 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 06:30:49.646835 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 06:30:49.646854 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 06:30:49.646873 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-02 06:30:49.646891 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-02 06:30:49.646926 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-02 06:31:33.480171 | orchestrator |
2026-02-02 06:31:33.480297 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-02 06:31:33.480315 | orchestrator | Monday 02 February 2026 06:30:49 +0000 (0:00:02.177) 0:57:17.214 *******
2026-02-02 06:31:33.480326 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-02 06:31:33.480338 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 06:31:33.480348 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 06:31:33.480360 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 06:31:33.480371 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-02 06:31:33.480382 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-02 06:31:33.480392 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-02 06:31:33.480403 | orchestrator |
2026-02-02 06:31:33.480414 | orchestrator | TASK [Prevent restarts from the packaging] *************************************
2026-02-02 06:31:33.480425 | orchestrator | Monday 02 February 2026 06:30:52 +0000 (0:00:02.624) 0:57:19.839 *******
2026-02-02 06:31:33.480436 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:31:33.480447 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:31:33.480458 | orchestrator |
2026-02-02 06:31:33.480469 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-02 06:31:33.480480 | orchestrator | Monday 02 February 2026 06:30:53 +0000 (0:00:01.233) 0:57:21.072 *******
2026-02-02 06:31:33.480503 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4
2026-02-02 06:31:33.480514 | orchestrator |
2026-02-02 06:31:33.480526 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-02 06:31:33.480536 | orchestrator | Monday 02 February 2026 06:30:55 +0000 (0:00:01.576) 0:57:22.649 *******
2026-02-02 06:31:33.480638 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4
2026-02-02 06:31:33.480664 | orchestrator |
2026-02-02 06:31:33.480682 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-02 06:31:33.480700 | orchestrator | Monday 02 February 2026 06:30:56 +0000 (0:00:01.287) 0:57:23.937 *******
2026-02-02 06:31:33.480720 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:31:33.480739 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:31:33.480759 | orchestrator |
2026-02-02 06:31:33.480779 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-02 06:31:33.480797 | orchestrator | Monday 02 February 2026 06:30:57 +0000 (0:00:01.246) 0:57:25.184 *******
2026-02-02 06:31:33.480817 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:31:33.480837 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:31:33.480857 | orchestrator |
2026-02-02 06:31:33.480876 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-02 06:31:33.480895 | orchestrator | Monday 02 February 2026 06:30:59 +0000 (0:00:01.609) 0:57:26.793 *******
2026-02-02 06:31:33.480943 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:31:33.480957 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:31:33.480969 | orchestrator |
2026-02-02 06:31:33.480982 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-02 06:31:33.480995 | orchestrator | Monday 02 February 2026 06:31:00 +0000 (0:00:01.656) 0:57:28.450 *******
2026-02-02 06:31:33.481007 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:31:33.481020 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:31:33.481032 | orchestrator |
2026-02-02 06:31:33.481044 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-02 06:31:33.481057 | orchestrator | Monday 02 February 2026 06:31:02 +0000 (0:00:01.644) 0:57:30.094 *******
2026-02-02 06:31:33.481069 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:31:33.481082 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:31:33.481094 | orchestrator |
2026-02-02 06:31:33.481106 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-02 06:31:33.481117 | orchestrator | Monday 02 February 2026 06:31:03 +0000 (0:00:01.291) 0:57:31.385 *******
2026-02-02 06:31:33.481127 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:31:33.481138 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:31:33.481149 | orchestrator |
2026-02-02 06:31:33.481160 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-02 06:31:33.481170 | orchestrator | Monday 02 February 2026 06:31:05 +0000 (0:00:01.286) 0:57:32.672 *******
2026-02-02 06:31:33.481181 | orchestrator | skipping:
[testbed-node-3] 2026-02-02 06:31:33.481192 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:31:33.481202 | orchestrator | 2026-02-02 06:31:33.481213 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-02 06:31:33.481224 | orchestrator | Monday 02 February 2026 06:31:06 +0000 (0:00:01.228) 0:57:33.900 ******* 2026-02-02 06:31:33.481234 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:31:33.481245 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:31:33.481255 | orchestrator | 2026-02-02 06:31:33.481266 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-02 06:31:33.481276 | orchestrator | Monday 02 February 2026 06:31:07 +0000 (0:00:01.616) 0:57:35.517 ******* 2026-02-02 06:31:33.481287 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:31:33.481297 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:31:33.481308 | orchestrator | 2026-02-02 06:31:33.481318 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-02 06:31:33.481329 | orchestrator | Monday 02 February 2026 06:31:09 +0000 (0:00:01.660) 0:57:37.178 ******* 2026-02-02 06:31:33.481340 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:31:33.481350 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:31:33.481361 | orchestrator | 2026-02-02 06:31:33.481371 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-02 06:31:33.481382 | orchestrator | Monday 02 February 2026 06:31:10 +0000 (0:00:01.331) 0:57:38.510 ******* 2026-02-02 06:31:33.481393 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:31:33.481422 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:31:33.481433 | orchestrator | 2026-02-02 06:31:33.481444 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-02 06:31:33.481454 | orchestrator | Monday 02 
February 2026 06:31:12 +0000 (0:00:01.230) 0:57:39.740 ******* 2026-02-02 06:31:33.481465 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:31:33.481475 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:31:33.481486 | orchestrator | 2026-02-02 06:31:33.481497 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-02 06:31:33.481507 | orchestrator | Monday 02 February 2026 06:31:13 +0000 (0:00:01.209) 0:57:40.949 ******* 2026-02-02 06:31:33.481518 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:31:33.481528 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:31:33.481539 | orchestrator | 2026-02-02 06:31:33.481589 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-02 06:31:33.481612 | orchestrator | Monday 02 February 2026 06:31:14 +0000 (0:00:01.260) 0:57:42.210 ******* 2026-02-02 06:31:33.481623 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:31:33.481633 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:31:33.481644 | orchestrator | 2026-02-02 06:31:33.481654 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-02 06:31:33.481665 | orchestrator | Monday 02 February 2026 06:31:15 +0000 (0:00:01.204) 0:57:43.414 ******* 2026-02-02 06:31:33.481676 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:31:33.481686 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:31:33.481697 | orchestrator | 2026-02-02 06:31:33.481708 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-02 06:31:33.481718 | orchestrator | Monday 02 February 2026 06:31:17 +0000 (0:00:01.197) 0:57:44.611 ******* 2026-02-02 06:31:33.481729 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:31:33.481740 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:31:33.481750 | orchestrator | 2026-02-02 06:31:33.481769 | orchestrator | TASK [ceph-handler : Set_fact 
handler_mgr_status] ****************************** 2026-02-02 06:31:33.481780 | orchestrator | Monday 02 February 2026 06:31:18 +0000 (0:00:01.215) 0:57:45.827 ******* 2026-02-02 06:31:33.481790 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:31:33.481801 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:31:33.481812 | orchestrator | 2026-02-02 06:31:33.481822 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-02 06:31:33.481833 | orchestrator | Monday 02 February 2026 06:31:19 +0000 (0:00:01.568) 0:57:47.395 ******* 2026-02-02 06:31:33.481844 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:31:33.481854 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:31:33.481865 | orchestrator | 2026-02-02 06:31:33.481875 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-02 06:31:33.481886 | orchestrator | Monday 02 February 2026 06:31:21 +0000 (0:00:01.340) 0:57:48.736 ******* 2026-02-02 06:31:33.481896 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:31:33.481907 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:31:33.481918 | orchestrator | 2026-02-02 06:31:33.481928 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-02 06:31:33.481939 | orchestrator | Monday 02 February 2026 06:31:22 +0000 (0:00:01.229) 0:57:49.966 ******* 2026-02-02 06:31:33.481949 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:31:33.481960 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:31:33.481970 | orchestrator | 2026-02-02 06:31:33.481981 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-02 06:31:33.481992 | orchestrator | Monday 02 February 2026 06:31:23 +0000 (0:00:01.259) 0:57:51.225 ******* 2026-02-02 06:31:33.482002 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:31:33.482065 | orchestrator | skipping: [testbed-node-4] 
2026-02-02 06:31:33.482078 | orchestrator | 2026-02-02 06:31:33.482088 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-02 06:31:33.482099 | orchestrator | Monday 02 February 2026 06:31:24 +0000 (0:00:01.265) 0:57:52.490 ******* 2026-02-02 06:31:33.482109 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:31:33.482120 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:31:33.482130 | orchestrator | 2026-02-02 06:31:33.482141 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-02 06:31:33.482152 | orchestrator | Monday 02 February 2026 06:31:26 +0000 (0:00:01.386) 0:57:53.877 ******* 2026-02-02 06:31:33.482162 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:31:33.482173 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:31:33.482184 | orchestrator | 2026-02-02 06:31:33.482195 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-02 06:31:33.482205 | orchestrator | Monday 02 February 2026 06:31:27 +0000 (0:00:01.140) 0:57:55.018 ******* 2026-02-02 06:31:33.482216 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:31:33.482226 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:31:33.482237 | orchestrator | 2026-02-02 06:31:33.482255 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-02 06:31:33.482266 | orchestrator | Monday 02 February 2026 06:31:28 +0000 (0:00:01.157) 0:57:56.175 ******* 2026-02-02 06:31:33.482276 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:31:33.482287 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:31:33.482298 | orchestrator | 2026-02-02 06:31:33.482309 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-02 06:31:33.482319 | orchestrator | Monday 02 February 2026 06:31:29 +0000 (0:00:01.229) 0:57:57.404 ******* 
2026-02-02 06:31:33.482330 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:31:33.482340 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:31:33.482351 | orchestrator | 2026-02-02 06:31:33.482362 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-02 06:31:33.482372 | orchestrator | Monday 02 February 2026 06:31:30 +0000 (0:00:01.178) 0:57:58.583 ******* 2026-02-02 06:31:33.482383 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:31:33.482394 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:31:33.482404 | orchestrator | 2026-02-02 06:31:33.482415 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-02 06:31:33.482425 | orchestrator | Monday 02 February 2026 06:31:32 +0000 (0:00:01.238) 0:57:59.822 ******* 2026-02-02 06:31:33.482436 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:31:33.482447 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:31:33.482457 | orchestrator | 2026-02-02 06:31:33.482476 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-02 06:32:18.211043 | orchestrator | Monday 02 February 2026 06:31:33 +0000 (0:00:01.227) 0:58:01.050 ******* 2026-02-02 06:32:18.211135 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:18.211147 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:18.211155 | orchestrator | 2026-02-02 06:32:18.211163 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-02 06:32:18.211171 | orchestrator | Monday 02 February 2026 06:31:34 +0000 (0:00:01.175) 0:58:02.225 ******* 2026-02-02 06:32:18.211178 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:18.211200 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:18.211208 | orchestrator | 2026-02-02 06:32:18.211215 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-02-02 06:32:18.211223 | orchestrator | Monday 02 February 2026 06:31:35 +0000 (0:00:01.201) 0:58:03.426 ******* 2026-02-02 06:32:18.211230 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:18.211237 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:18.211245 | orchestrator | 2026-02-02 06:32:18.211252 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-02 06:32:18.211259 | orchestrator | Monday 02 February 2026 06:31:37 +0000 (0:00:01.338) 0:58:04.765 ******* 2026-02-02 06:32:18.211266 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:32:18.211275 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:32:18.211282 | orchestrator | 2026-02-02 06:32:18.211289 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-02 06:32:18.211296 | orchestrator | Monday 02 February 2026 06:31:39 +0000 (0:00:02.035) 0:58:06.800 ******* 2026-02-02 06:32:18.211303 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:32:18.211310 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:32:18.211318 | orchestrator | 2026-02-02 06:32:18.211325 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-02 06:32:18.211346 | orchestrator | Monday 02 February 2026 06:31:41 +0000 (0:00:02.448) 0:58:09.248 ******* 2026-02-02 06:32:18.211354 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4 2026-02-02 06:32:18.211362 | orchestrator | 2026-02-02 06:32:18.211369 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-02 06:32:18.211376 | orchestrator | Monday 02 February 2026 06:31:43 +0000 (0:00:01.445) 0:58:10.694 ******* 2026-02-02 06:32:18.211384 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:18.211407 | orchestrator | skipping: [testbed-node-4] 
2026-02-02 06:32:18.211415 | orchestrator | 2026-02-02 06:32:18.211422 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-02 06:32:18.211429 | orchestrator | Monday 02 February 2026 06:31:44 +0000 (0:00:01.240) 0:58:11.934 ******* 2026-02-02 06:32:18.211436 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:18.211443 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:18.211450 | orchestrator | 2026-02-02 06:32:18.211458 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-02 06:32:18.211465 | orchestrator | Monday 02 February 2026 06:31:45 +0000 (0:00:01.293) 0:58:13.227 ******* 2026-02-02 06:32:18.211472 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-02 06:32:18.211479 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-02 06:32:18.211486 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-02 06:32:18.211493 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-02 06:32:18.211500 | orchestrator | 2026-02-02 06:32:18.211508 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-02 06:32:18.211515 | orchestrator | Monday 02 February 2026 06:31:47 +0000 (0:00:01.922) 0:58:15.150 ******* 2026-02-02 06:32:18.211522 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:32:18.211529 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:32:18.211536 | orchestrator | 2026-02-02 06:32:18.211543 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-02 06:32:18.211550 | orchestrator | Monday 02 February 2026 06:31:49 +0000 (0:00:01.546) 0:58:16.697 ******* 2026-02-02 06:32:18.211557 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:18.211565 | 
orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:18.211589 | orchestrator | 2026-02-02 06:32:18.211597 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-02 06:32:18.211604 | orchestrator | Monday 02 February 2026 06:31:50 +0000 (0:00:01.286) 0:58:17.984 ******* 2026-02-02 06:32:18.211612 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:18.211619 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:18.211626 | orchestrator | 2026-02-02 06:32:18.211633 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-02 06:32:18.211640 | orchestrator | Monday 02 February 2026 06:31:51 +0000 (0:00:01.360) 0:58:19.344 ******* 2026-02-02 06:32:18.211647 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:18.211654 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:18.211661 | orchestrator | 2026-02-02 06:32:18.211668 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-02 06:32:18.211675 | orchestrator | Monday 02 February 2026 06:31:52 +0000 (0:00:01.211) 0:58:20.556 ******* 2026-02-02 06:32:18.211683 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4 2026-02-02 06:32:18.211690 | orchestrator | 2026-02-02 06:32:18.211697 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-02 06:32:18.211704 | orchestrator | Monday 02 February 2026 06:31:54 +0000 (0:00:01.328) 0:58:21.884 ******* 2026-02-02 06:32:18.211711 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:32:18.211718 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:32:18.211725 | orchestrator | 2026-02-02 06:32:18.211733 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-02 06:32:18.211740 | orchestrator | Monday 02 February 2026 
06:31:56 +0000 (0:00:01.822) 0:58:23.707 ******* 2026-02-02 06:32:18.211747 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-02 06:32:18.211768 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-02 06:32:18.211776 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-02 06:32:18.211790 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:18.211798 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-02 06:32:18.211805 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-02 06:32:18.211812 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-02 06:32:18.211819 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:18.211826 | orchestrator | 2026-02-02 06:32:18.211833 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-02 06:32:18.211840 | orchestrator | Monday 02 February 2026 06:31:57 +0000 (0:00:01.352) 0:58:25.059 ******* 2026-02-02 06:32:18.211847 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:18.211854 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:18.211861 | orchestrator | 2026-02-02 06:32:18.211868 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-02 06:32:18.211876 | orchestrator | Monday 02 February 2026 06:31:58 +0000 (0:00:01.240) 0:58:26.299 ******* 2026-02-02 06:32:18.211883 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:18.211890 | orchestrator | 2026-02-02 06:32:18.211897 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-02 06:32:18.211904 | orchestrator | Monday 02 February 2026 06:31:59 +0000 (0:00:01.203) 0:58:27.503 ******* 2026-02-02 06:32:18.211911 | orchestrator | 
skipping: [testbed-node-3] 2026-02-02 06:32:18.211918 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:18.211925 | orchestrator | 2026-02-02 06:32:18.211936 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-02 06:32:18.211944 | orchestrator | Monday 02 February 2026 06:32:01 +0000 (0:00:01.295) 0:58:28.799 ******* 2026-02-02 06:32:18.211951 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:18.211958 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:18.211965 | orchestrator | 2026-02-02 06:32:18.211972 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-02 06:32:18.211979 | orchestrator | Monday 02 February 2026 06:32:02 +0000 (0:00:01.256) 0:58:30.056 ******* 2026-02-02 06:32:18.211986 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:18.211993 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:18.212001 | orchestrator | 2026-02-02 06:32:18.212008 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-02 06:32:18.212015 | orchestrator | Monday 02 February 2026 06:32:03 +0000 (0:00:01.232) 0:58:31.288 ******* 2026-02-02 06:32:18.212022 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:32:18.212029 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:32:18.212036 | orchestrator | 2026-02-02 06:32:18.212044 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-02 06:32:18.212051 | orchestrator | Monday 02 February 2026 06:32:06 +0000 (0:00:02.615) 0:58:33.903 ******* 2026-02-02 06:32:18.212058 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:32:18.212065 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:32:18.212072 | orchestrator | 2026-02-02 06:32:18.212079 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-02 06:32:18.212086 | orchestrator 
| Monday 02 February 2026 06:32:07 +0000 (0:00:01.221) 0:58:35.125 ******* 2026-02-02 06:32:18.212093 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4 2026-02-02 06:32:18.212101 | orchestrator | 2026-02-02 06:32:18.212108 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-02 06:32:18.212115 | orchestrator | Monday 02 February 2026 06:32:08 +0000 (0:00:01.435) 0:58:36.560 ******* 2026-02-02 06:32:18.212122 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:18.212129 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:18.212136 | orchestrator | 2026-02-02 06:32:18.212144 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-02 06:32:18.212151 | orchestrator | Monday 02 February 2026 06:32:10 +0000 (0:00:01.240) 0:58:37.801 ******* 2026-02-02 06:32:18.212162 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:18.212170 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:18.212177 | orchestrator | 2026-02-02 06:32:18.212184 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-02 06:32:18.212191 | orchestrator | Monday 02 February 2026 06:32:11 +0000 (0:00:01.273) 0:58:39.074 ******* 2026-02-02 06:32:18.212198 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:18.212205 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:18.212212 | orchestrator | 2026-02-02 06:32:18.212219 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-02 06:32:18.212226 | orchestrator | Monday 02 February 2026 06:32:12 +0000 (0:00:01.232) 0:58:40.307 ******* 2026-02-02 06:32:18.212233 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:18.212241 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:18.212248 | orchestrator | 2026-02-02 
06:32:18.212255 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-02 06:32:18.212262 | orchestrator | Monday 02 February 2026 06:32:14 +0000 (0:00:01.291) 0:58:41.598 ******* 2026-02-02 06:32:18.212269 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:18.212276 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:18.212283 | orchestrator | 2026-02-02 06:32:18.212290 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-02 06:32:18.212297 | orchestrator | Monday 02 February 2026 06:32:15 +0000 (0:00:01.246) 0:58:42.845 ******* 2026-02-02 06:32:18.212304 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:18.212311 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:18.212319 | orchestrator | 2026-02-02 06:32:18.212326 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-02 06:32:18.212333 | orchestrator | Monday 02 February 2026 06:32:16 +0000 (0:00:01.229) 0:58:44.075 ******* 2026-02-02 06:32:18.212340 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:18.212347 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:18.212354 | orchestrator | 2026-02-02 06:32:18.212365 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-02 06:32:58.854405 | orchestrator | Monday 02 February 2026 06:32:18 +0000 (0:00:01.706) 0:58:45.782 ******* 2026-02-02 06:32:58.854542 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:58.854568 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:58.854699 | orchestrator | 2026-02-02 06:32:58.854724 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-02 06:32:58.854742 | orchestrator | Monday 02 February 2026 06:32:19 +0000 (0:00:01.259) 0:58:47.041 ******* 2026-02-02 06:32:58.854759 | orchestrator | ok: 
[testbed-node-3] 2026-02-02 06:32:58.854778 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:32:58.854789 | orchestrator | 2026-02-02 06:32:58.854799 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-02 06:32:58.854809 | orchestrator | Monday 02 February 2026 06:32:20 +0000 (0:00:01.281) 0:58:48.323 ******* 2026-02-02 06:32:58.854820 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4 2026-02-02 06:32:58.854830 | orchestrator | 2026-02-02 06:32:58.854840 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-02 06:32:58.854930 | orchestrator | Monday 02 February 2026 06:32:22 +0000 (0:00:01.279) 0:58:49.602 ******* 2026-02-02 06:32:58.854942 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-02-02 06:32:58.854955 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-02-02 06:32:58.854966 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-02-02 06:32:58.854978 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-02 06:32:58.854989 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-02 06:32:58.855000 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-02 06:32:58.855028 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-02 06:32:58.855040 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-02 06:32:58.855074 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-02 06:32:58.855086 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-02 06:32:58.855097 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-02 06:32:58.855108 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-02 06:32:58.855119 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 
2026-02-02 06:32:58.855131 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-02 06:32:58.855142 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-02 06:32:58.855153 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-02 06:32:58.855164 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-02 06:32:58.855175 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-02 06:32:58.855185 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-02 06:32:58.855195 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-02 06:32:58.855204 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-02 06:32:58.855214 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-02 06:32:58.855223 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-02 06:32:58.855233 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-02 06:32:58.855243 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-02 06:32:58.855252 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-02 06:32:58.855262 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-02 06:32:58.855271 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-02 06:32:58.855281 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-02-02 06:32:58.855291 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-02-02 06:32:58.855300 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-02-02 06:32:58.855310 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-02-02 06:32:58.855319 | orchestrator | 2026-02-02 06:32:58.855329 | orchestrator | TASK 
[ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-02 06:32:58.855339 | orchestrator | Monday 02 February 2026 06:32:28 +0000 (0:00:06.547) 0:58:56.149 ******* 2026-02-02 06:32:58.855348 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4 2026-02-02 06:32:58.855358 | orchestrator | 2026-02-02 06:32:58.855368 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-02 06:32:58.855377 | orchestrator | Monday 02 February 2026 06:32:29 +0000 (0:00:01.243) 0:58:57.393 ******* 2026-02-02 06:32:58.855388 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-02 06:32:58.855400 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-02 06:32:58.855409 | orchestrator | 2026-02-02 06:32:58.855419 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-02 06:32:58.855429 | orchestrator | Monday 02 February 2026 06:32:31 +0000 (0:00:01.613) 0:58:59.007 ******* 2026-02-02 06:32:58.855438 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-02 06:32:58.855448 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-02 06:32:58.855458 | orchestrator | 2026-02-02 06:32:58.855468 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-02 06:32:58.855499 | orchestrator | Monday 02 February 2026 06:32:33 +0000 (0:00:02.166) 0:59:01.174 ******* 2026-02-02 06:32:58.855518 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:58.855528 | orchestrator | 
skipping: [testbed-node-4] 2026-02-02 06:32:58.855538 | orchestrator | 2026-02-02 06:32:58.855548 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-02 06:32:58.855557 | orchestrator | Monday 02 February 2026 06:32:34 +0000 (0:00:01.229) 0:59:02.404 ******* 2026-02-02 06:32:58.855567 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:58.855604 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:58.855623 | orchestrator | 2026-02-02 06:32:58.855639 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-02 06:32:58.855655 | orchestrator | Monday 02 February 2026 06:32:36 +0000 (0:00:01.265) 0:59:03.669 ******* 2026-02-02 06:32:58.855671 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:58.855686 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:58.855702 | orchestrator | 2026-02-02 06:32:58.855719 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-02 06:32:58.855736 | orchestrator | Monday 02 February 2026 06:32:37 +0000 (0:00:01.548) 0:59:05.217 ******* 2026-02-02 06:32:58.855753 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:58.855769 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:58.855784 | orchestrator | 2026-02-02 06:32:58.855794 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-02 06:32:58.855803 | orchestrator | Monday 02 February 2026 06:32:38 +0000 (0:00:01.212) 0:59:06.430 ******* 2026-02-02 06:32:58.855813 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:58.855822 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:58.855831 | orchestrator | 2026-02-02 06:32:58.855848 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-02 06:32:58.855858 | orchestrator | Monday 02 February 2026 
06:32:40 +0000 (0:00:01.213) 0:59:07.644 ******* 2026-02-02 06:32:58.855867 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:58.855877 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:58.855886 | orchestrator | 2026-02-02 06:32:58.855896 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-02 06:32:58.855906 | orchestrator | Monday 02 February 2026 06:32:41 +0000 (0:00:01.352) 0:59:08.997 ******* 2026-02-02 06:32:58.855915 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:58.855925 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:58.855934 | orchestrator | 2026-02-02 06:32:58.855943 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-02 06:32:58.855953 | orchestrator | Monday 02 February 2026 06:32:42 +0000 (0:00:01.235) 0:59:10.233 ******* 2026-02-02 06:32:58.855962 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:58.855972 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:58.855982 | orchestrator | 2026-02-02 06:32:58.855991 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-02 06:32:58.856001 | orchestrator | Monday 02 February 2026 06:32:43 +0000 (0:00:01.315) 0:59:11.548 ******* 2026-02-02 06:32:58.856010 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:58.856020 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:58.856029 | orchestrator | 2026-02-02 06:32:58.856042 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-02 06:32:58.856059 | orchestrator | Monday 02 February 2026 06:32:45 +0000 (0:00:01.283) 0:59:12.832 ******* 2026-02-02 06:32:58.856075 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:58.856090 | orchestrator | skipping: [testbed-node-4] 2026-02-02 
06:32:58.856105 | orchestrator | 2026-02-02 06:32:58.856121 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-02 06:32:58.856136 | orchestrator | Monday 02 February 2026 06:32:46 +0000 (0:00:01.582) 0:59:14.414 ******* 2026-02-02 06:32:58.856152 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:32:58.856166 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:32:58.856192 | orchestrator | 2026-02-02 06:32:58.856208 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-02 06:32:58.856225 | orchestrator | Monday 02 February 2026 06:32:48 +0000 (0:00:01.271) 0:59:15.685 ******* 2026-02-02 06:32:58.856240 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-02 06:32:58.856256 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-02-02 06:32:58.856272 | orchestrator | 2026-02-02 06:32:58.856289 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-02 06:32:58.856305 | orchestrator | Monday 02 February 2026 06:32:52 +0000 (0:00:04.474) 0:59:20.160 ******* 2026-02-02 06:32:58.856320 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-02 06:32:58.856336 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-02 06:32:58.856352 | orchestrator | 2026-02-02 06:32:58.856368 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-02 06:32:58.856385 | orchestrator | Monday 02 February 2026 06:32:53 +0000 (0:00:01.308) 0:59:21.468 ******* 2026-02-02 06:32:58.856406 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': 
'/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-02-02 06:32:58.856438 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-02-02 06:33:46.872177 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-02-02 06:33:46.872306 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-02-02 06:33:46.872328 | orchestrator | 2026-02-02 06:33:46.872347 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-02 06:33:46.872364 | orchestrator | Monday 02 February 2026 06:32:58 +0000 (0:00:04.954) 0:59:26.423 ******* 2026-02-02 06:33:46.872378 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:33:46.872395 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:33:46.872410 | orchestrator | 2026-02-02 06:33:46.872425 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-02 06:33:46.872458 | orchestrator | Monday 02 February 2026 06:33:00 +0000 
(0:00:01.270) 0:59:27.694 ******* 2026-02-02 06:33:46.872474 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:33:46.872489 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:33:46.872504 | orchestrator | 2026-02-02 06:33:46.872519 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-02 06:33:46.872535 | orchestrator | Monday 02 February 2026 06:33:01 +0000 (0:00:01.324) 0:59:29.019 ******* 2026-02-02 06:33:46.872549 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:33:46.872563 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:33:46.872579 | orchestrator | 2026-02-02 06:33:46.872590 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-02 06:33:46.872650 | orchestrator | Monday 02 February 2026 06:33:02 +0000 (0:00:01.237) 0:59:30.256 ******* 2026-02-02 06:33:46.872662 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:33:46.872671 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:33:46.872679 | orchestrator | 2026-02-02 06:33:46.872688 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-02 06:33:46.872696 | orchestrator | Monday 02 February 2026 06:33:03 +0000 (0:00:01.307) 0:59:31.563 ******* 2026-02-02 06:33:46.872706 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:33:46.872716 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:33:46.872725 | orchestrator | 2026-02-02 06:33:46.872735 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-02 06:33:46.872746 | orchestrator | Monday 02 February 2026 06:33:05 +0000 (0:00:01.285) 0:59:32.849 ******* 2026-02-02 06:33:46.872760 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:33:46.872777 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:33:46.872793 | orchestrator | 2026-02-02 
06:33:46.872809 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-02 06:33:46.872826 | orchestrator | Monday 02 February 2026 06:33:06 +0000 (0:00:01.390) 0:59:34.239 ******* 2026-02-02 06:33:46.872843 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-02 06:33:46.872860 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-02 06:33:46.872878 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-02 06:33:46.872895 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:33:46.872907 | orchestrator | 2026-02-02 06:33:46.872916 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-02 06:33:46.872926 | orchestrator | Monday 02 February 2026 06:33:08 +0000 (0:00:01.488) 0:59:35.728 ******* 2026-02-02 06:33:46.872936 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-02 06:33:46.872946 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-02 06:33:46.872956 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-02 06:33:46.872965 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:33:46.872975 | orchestrator | 2026-02-02 06:33:46.872985 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-02 06:33:46.872996 | orchestrator | Monday 02 February 2026 06:33:09 +0000 (0:00:01.393) 0:59:37.121 ******* 2026-02-02 06:33:46.873005 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-02 06:33:46.873015 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-02 06:33:46.873025 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-02 06:33:46.873035 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:33:46.873045 | orchestrator | 2026-02-02 06:33:46.873055 | orchestrator | TASK [ceph-facts : 
Reset rgw_instances (workaround)] *************************** 2026-02-02 06:33:46.873065 | orchestrator | Monday 02 February 2026 06:33:11 +0000 (0:00:01.823) 0:59:38.944 ******* 2026-02-02 06:33:46.873075 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:33:46.873085 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:33:46.873095 | orchestrator | 2026-02-02 06:33:46.873105 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-02 06:33:46.873115 | orchestrator | Monday 02 February 2026 06:33:12 +0000 (0:00:01.315) 0:59:40.260 ******* 2026-02-02 06:33:46.873124 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-02 06:33:46.873133 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-02 06:33:46.873141 | orchestrator | 2026-02-02 06:33:46.873149 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-02 06:33:46.873158 | orchestrator | Monday 02 February 2026 06:33:14 +0000 (0:00:01.436) 0:59:41.697 ******* 2026-02-02 06:33:46.873166 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:33:46.873175 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:33:46.873183 | orchestrator | 2026-02-02 06:33:46.873212 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-02-02 06:33:46.873237 | orchestrator | Monday 02 February 2026 06:33:15 +0000 (0:00:01.846) 0:59:43.544 ******* 2026-02-02 06:33:46.873252 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:33:46.873266 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:33:46.873280 | orchestrator | 2026-02-02 06:33:46.873294 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-02-02 06:33:46.873309 | orchestrator | Monday 02 February 2026 06:33:17 +0000 (0:00:01.256) 0:59:44.800 ******* 2026-02-02 06:33:46.873323 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, 
testbed-node-4 2026-02-02 06:33:46.873340 | orchestrator | 2026-02-02 06:33:46.873355 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-02-02 06:33:46.873366 | orchestrator | Monday 02 February 2026 06:33:18 +0000 (0:00:01.430) 0:59:46.231 ******* 2026-02-02 06:33:46.873375 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-02 06:33:46.873383 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-02 06:33:46.873391 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-02-02 06:33:46.873400 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-02-02 06:33:46.873408 | orchestrator | 2026-02-02 06:33:46.873417 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-02-02 06:33:46.873432 | orchestrator | Monday 02 February 2026 06:33:20 +0000 (0:00:01.970) 0:59:48.202 ******* 2026-02-02 06:33:46.873440 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 06:33:46.873449 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-02 06:33:46.873458 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-02 06:33:46.873466 | orchestrator | 2026-02-02 06:33:46.873475 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-02-02 06:33:46.873483 | orchestrator | Monday 02 February 2026 06:33:23 +0000 (0:00:03.145) 0:59:51.348 ******* 2026-02-02 06:33:46.873492 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-02 06:33:46.873500 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-02 06:33:46.873509 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:33:46.873517 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-02 06:33:46.873526 | orchestrator | skipping: [testbed-node-4] => 
(item=None)  2026-02-02 06:33:46.873534 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:33:46.873543 | orchestrator | 2026-02-02 06:33:46.873551 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-02 06:33:46.873559 | orchestrator | Monday 02 February 2026 06:33:25 +0000 (0:00:02.039) 0:59:53.388 ******* 2026-02-02 06:33:46.873568 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:33:46.873576 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:33:46.873585 | orchestrator | 2026-02-02 06:33:46.873594 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-02 06:33:46.873656 | orchestrator | Monday 02 February 2026 06:33:27 +0000 (0:00:01.706) 0:59:55.095 ******* 2026-02-02 06:33:46.873668 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:33:46.873676 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:33:46.873685 | orchestrator | 2026-02-02 06:33:46.873693 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-02 06:33:46.873702 | orchestrator | Monday 02 February 2026 06:33:28 +0000 (0:00:01.259) 0:59:56.354 ******* 2026-02-02 06:33:46.873710 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4 2026-02-02 06:33:46.873719 | orchestrator | 2026-02-02 06:33:46.873728 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-02 06:33:46.873736 | orchestrator | Monday 02 February 2026 06:33:30 +0000 (0:00:01.397) 0:59:57.752 ******* 2026-02-02 06:33:46.873745 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4 2026-02-02 06:33:46.873753 | orchestrator | 2026-02-02 06:33:46.873762 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-02 06:33:46.873777 | orchestrator | Monday 02 February 2026 
06:33:31 +0000 (0:00:01.265) 0:59:59.018 ******* 2026-02-02 06:33:46.873786 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:33:46.873794 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:33:46.873803 | orchestrator | 2026-02-02 06:33:46.873812 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-02 06:33:46.873820 | orchestrator | Monday 02 February 2026 06:33:33 +0000 (0:00:02.135) 1:00:01.154 ******* 2026-02-02 06:33:46.873829 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:33:46.873837 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:33:46.873846 | orchestrator | 2026-02-02 06:33:46.873854 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-02 06:33:46.873863 | orchestrator | Monday 02 February 2026 06:33:35 +0000 (0:00:02.009) 1:00:03.163 ******* 2026-02-02 06:33:46.873871 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:33:46.873880 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:33:46.873888 | orchestrator | 2026-02-02 06:33:46.873897 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-02 06:33:46.873912 | orchestrator | Monday 02 February 2026 06:33:37 +0000 (0:00:02.288) 1:00:05.451 ******* 2026-02-02 06:33:46.873927 | orchestrator | changed: [testbed-node-3] 2026-02-02 06:33:46.873943 | orchestrator | changed: [testbed-node-4] 2026-02-02 06:33:46.873957 | orchestrator | 2026-02-02 06:33:46.873972 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-02 06:33:46.873988 | orchestrator | Monday 02 February 2026 06:33:41 +0000 (0:00:03.516) 1:00:08.968 ******* 2026-02-02 06:33:46.874005 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:33:46.874080 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:33:46.874093 | orchestrator | 2026-02-02 06:33:46.874101 | orchestrator | TASK [Set max_mds] 
************************************************************* 2026-02-02 06:33:46.874110 | orchestrator | Monday 02 February 2026 06:33:43 +0000 (0:00:02.148) 1:00:11.116 ******* 2026-02-02 06:33:46.874118 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:33:46.874136 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-02 06:34:09.602755 | orchestrator | 2026-02-02 06:34:09.602870 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-02-02 06:34:09.602886 | orchestrator | 2026-02-02 06:34:09.602899 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-02 06:34:09.602910 | orchestrator | Monday 02 February 2026 06:33:46 +0000 (0:00:03.316) 1:00:14.433 ******* 2026-02-02 06:34:09.602921 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-02-02 06:34:09.602933 | orchestrator | 2026-02-02 06:34:09.602945 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-02 06:34:09.602955 | orchestrator | Monday 02 February 2026 06:33:47 +0000 (0:00:01.145) 1:00:15.578 ******* 2026-02-02 06:34:09.602967 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:34:09.602979 | orchestrator | 2026-02-02 06:34:09.602990 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-02 06:34:09.603001 | orchestrator | Monday 02 February 2026 06:33:49 +0000 (0:00:01.487) 1:00:17.065 ******* 2026-02-02 06:34:09.603012 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:34:09.603023 | orchestrator | 2026-02-02 06:34:09.603034 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-02 06:34:09.603045 | orchestrator | Monday 02 February 2026 06:33:50 +0000 (0:00:01.127) 1:00:18.193 ******* 2026-02-02 06:34:09.603056 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:34:09.603066 | 
orchestrator | 2026-02-02 06:34:09.603077 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-02 06:34:09.603103 | orchestrator | Monday 02 February 2026 06:33:52 +0000 (0:00:01.460) 1:00:19.653 ******* 2026-02-02 06:34:09.603115 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:34:09.603126 | orchestrator | 2026-02-02 06:34:09.603137 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-02 06:34:09.603171 | orchestrator | Monday 02 February 2026 06:33:53 +0000 (0:00:01.137) 1:00:20.790 ******* 2026-02-02 06:34:09.603183 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:34:09.603193 | orchestrator | 2026-02-02 06:34:09.603204 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-02 06:34:09.603215 | orchestrator | Monday 02 February 2026 06:33:54 +0000 (0:00:01.250) 1:00:22.041 ******* 2026-02-02 06:34:09.603226 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:34:09.603237 | orchestrator | 2026-02-02 06:34:09.603248 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-02 06:34:09.603260 | orchestrator | Monday 02 February 2026 06:33:55 +0000 (0:00:01.248) 1:00:23.289 ******* 2026-02-02 06:34:09.603271 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:34:09.603282 | orchestrator | 2026-02-02 06:34:09.603293 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-02 06:34:09.603304 | orchestrator | Monday 02 February 2026 06:33:56 +0000 (0:00:01.159) 1:00:24.449 ******* 2026-02-02 06:34:09.603315 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:34:09.603326 | orchestrator | 2026-02-02 06:34:09.603337 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-02 06:34:09.603348 | orchestrator | Monday 02 February 2026 06:33:58 +0000 
(0:00:01.171) 1:00:25.620 ******* 2026-02-02 06:34:09.603359 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 06:34:09.603370 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 06:34:09.603381 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 06:34:09.603392 | orchestrator | 2026-02-02 06:34:09.603403 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-02 06:34:09.603414 | orchestrator | Monday 02 February 2026 06:33:59 +0000 (0:00:01.663) 1:00:27.284 ******* 2026-02-02 06:34:09.603425 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:34:09.603435 | orchestrator | 2026-02-02 06:34:09.603447 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-02 06:34:09.603458 | orchestrator | Monday 02 February 2026 06:34:00 +0000 (0:00:01.222) 1:00:28.507 ******* 2026-02-02 06:34:09.603469 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 06:34:09.603479 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 06:34:09.603490 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 06:34:09.603501 | orchestrator | 2026-02-02 06:34:09.603512 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-02 06:34:09.603523 | orchestrator | Monday 02 February 2026 06:34:03 +0000 (0:00:02.901) 1:00:31.409 ******* 2026-02-02 06:34:09.603534 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-02 06:34:09.603545 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-02 06:34:09.603556 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-02 
06:34:09.603567 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:34:09.603578 | orchestrator | 2026-02-02 06:34:09.603589 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-02 06:34:09.603600 | orchestrator | Monday 02 February 2026 06:34:05 +0000 (0:00:01.408) 1:00:32.818 ******* 2026-02-02 06:34:09.603645 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-02 06:34:09.603660 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-02 06:34:09.603690 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-02 06:34:09.603710 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:34:09.603721 | orchestrator | 2026-02-02 06:34:09.603733 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-02 06:34:09.603744 | orchestrator | Monday 02 February 2026 06:34:07 +0000 (0:00:01.960) 1:00:34.778 ******* 2026-02-02 06:34:09.603757 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 
06:34:09.603776 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 06:34:09.603788 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 06:34:09.603800 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:34:09.603811 | orchestrator | 2026-02-02 06:34:09.603822 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-02 06:34:09.603833 | orchestrator | Monday 02 February 2026 06:34:08 +0000 (0:00:01.177) 1:00:35.955 ******* 2026-02-02 06:34:09.603846 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '01c921aa07f8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-02 06:34:01.478980', 'end': '2026-02-02 06:34:01.523529', 'delta': '0:00:00.044549', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['01c921aa07f8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-02 06:34:09.603861 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c530967d0aad', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-02 06:34:02.043093', 'end': '2026-02-02 06:34:02.094911', 'delta': '0:00:00.051818', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c530967d0aad'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-02 06:34:09.603873 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a68c96a70534', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-02 06:34:02.603622', 'end': '2026-02-02 06:34:02.657914', 'delta': '0:00:00.054292', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a68c96a70534'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-02 06:34:09.603891 | orchestrator | 2026-02-02 06:34:09.603909 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-02 06:34:26.932595 | orchestrator | Monday 02 February 2026 06:34:09 +0000 (0:00:01.215) 1:00:37.171 ******* 2026-02-02 06:34:26.932701 | orchestrator | ok: [testbed-node-3] 2026-02-02 
06:34:26.932712 | orchestrator | 2026-02-02 06:34:26.932720 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-02 06:34:26.932727 | orchestrator | Monday 02 February 2026 06:34:10 +0000 (0:00:01.218) 1:00:38.390 ******* 2026-02-02 06:34:26.932733 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:34:26.932740 | orchestrator | 2026-02-02 06:34:26.932747 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-02 06:34:26.932754 | orchestrator | Monday 02 February 2026 06:34:12 +0000 (0:00:01.239) 1:00:39.629 ******* 2026-02-02 06:34:26.932760 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:34:26.932767 | orchestrator | 2026-02-02 06:34:26.932773 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-02 06:34:26.932779 | orchestrator | Monday 02 February 2026 06:34:13 +0000 (0:00:01.202) 1:00:40.832 ******* 2026-02-02 06:34:26.932786 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-02 06:34:26.932792 | orchestrator | 2026-02-02 06:34:26.932799 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 06:34:26.932805 | orchestrator | Monday 02 February 2026 06:34:15 +0000 (0:00:01.989) 1:00:42.821 ******* 2026-02-02 06:34:26.932811 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:34:26.932817 | orchestrator | 2026-02-02 06:34:26.932836 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-02 06:34:26.932842 | orchestrator | Monday 02 February 2026 06:34:16 +0000 (0:00:01.127) 1:00:43.949 ******* 2026-02-02 06:34:26.932849 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:34:26.932855 | orchestrator | 2026-02-02 06:34:26.932861 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-02 06:34:26.932867 | orchestrator 
| Monday 02 February 2026 06:34:17 +0000 (0:00:01.079) 1:00:45.028 ******* 2026-02-02 06:34:26.932874 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:34:26.932880 | orchestrator | 2026-02-02 06:34:26.932886 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 06:34:26.932892 | orchestrator | Monday 02 February 2026 06:34:18 +0000 (0:00:01.236) 1:00:46.265 ******* 2026-02-02 06:34:26.932898 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:34:26.932905 | orchestrator | 2026-02-02 06:34:26.932911 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-02 06:34:26.932917 | orchestrator | Monday 02 February 2026 06:34:19 +0000 (0:00:01.103) 1:00:47.369 ******* 2026-02-02 06:34:26.932923 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:34:26.932930 | orchestrator | 2026-02-02 06:34:26.932936 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-02 06:34:26.932942 | orchestrator | Monday 02 February 2026 06:34:20 +0000 (0:00:01.117) 1:00:48.486 ******* 2026-02-02 06:34:26.932948 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:34:26.932955 | orchestrator | 2026-02-02 06:34:26.932961 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-02 06:34:26.932968 | orchestrator | Monday 02 February 2026 06:34:22 +0000 (0:00:01.254) 1:00:49.740 ******* 2026-02-02 06:34:26.932974 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:34:26.932980 | orchestrator | 2026-02-02 06:34:26.932986 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-02 06:34:26.933008 | orchestrator | Monday 02 February 2026 06:34:23 +0000 (0:00:01.103) 1:00:50.844 ******* 2026-02-02 06:34:26.933015 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:34:26.933021 | orchestrator | 2026-02-02 06:34:26.933027 | 
orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-02 06:34:26.933033 | orchestrator | Monday 02 February 2026 06:34:24 +0000 (0:00:01.163) 1:00:52.008 ******* 2026-02-02 06:34:26.933040 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:34:26.933046 | orchestrator | 2026-02-02 06:34:26.933052 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-02 06:34:26.933059 | orchestrator | Monday 02 February 2026 06:34:25 +0000 (0:00:01.111) 1:00:53.119 ******* 2026-02-02 06:34:26.933065 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:34:26.933071 | orchestrator | 2026-02-02 06:34:26.933077 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-02 06:34:26.933083 | orchestrator | Monday 02 February 2026 06:34:26 +0000 (0:00:01.157) 1:00:54.276 ******* 2026-02-02 06:34:26.933091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:34:26.933102 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--af42a967--eb71--546a--abb0--a5185990ed2a-osd--block--af42a967--eb71--546a--abb0--a5185990ed2a', 'dm-uuid-LVM-nQNI9mGSypmWJN7Kribh0RNL5qLQKFSceYxT4mfzBYfoYiha3ZzoEdYR0rTnnIvK'], 'uuids': ['a78e3f4b-723a-42a3-abd4-4d699a55c416'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1f26c814', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['eYxT4m-fzBY-foYi-ha3Z-zoEd-YR0r-TnnIvK']}})  2026-02-02 06:34:26.933124 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c15f901f-7629-41e5-bfd5-e721d3f198c6', 'scsi-SQEMU_QEMU_HARDDISK_c15f901f-7629-41e5-bfd5-e721d3f198c6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c15f901f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-02 06:34:26.933136 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-HOxmXw-N5cX-V1Nz-Lu3r-OQk9-N5gG-1syyTi', 'scsi-0QEMU_QEMU_HARDDISK_5578c4aa-4507-4a80-9665-78072b9f11f4', 'scsi-SQEMU_QEMU_HARDDISK_5578c4aa-4507-4a80-9665-78072b9f11f4'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5578c4aa', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--2b8f5a57--fc4d--5c4a--8869--764dca19b379-osd--block--2b8f5a57--fc4d--5c4a--8869--764dca19b379']}})  2026-02-02 06:34:26.933144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:34:26.933156 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:34:26.933165 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-02 06:34:26.933173 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:34:26.933181 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-gyH8a8-4MHr-Jn7b-cz7p-hSu8-LEA3-bm3DqO', 'dm-uuid-CRYPT-LUKS2-8edeb25f170042ba8e6d0505727d2968-gyH8a8-4MHr-Jn7b-cz7p-hSu8-LEA3-bm3DqO'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-02 06:34:26.933193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:34:28.247523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2b8f5a57--fc4d--5c4a--8869--764dca19b379-osd--block--2b8f5a57--fc4d--5c4a--8869--764dca19b379', 'dm-uuid-LVM-2Xx1rXy8ZvvzVeymXUM2Y23jmTeKUn30gyH8a84MHrJn7bcz7phSu8LEA3bm3DqO'], 'uuids': ['8edeb25f-1700-42ba-8e6d-0505727d2968'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5578c4aa', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['gyH8a8-4MHr-Jn7b-cz7p-hSu8-LEA3-bm3DqO']}})  2026-02-02 06:34:28.247747 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-yf6lEa-f3nO-iewk-DEDy-Fb6j-Kq2P-dbkgMf', 'scsi-0QEMU_QEMU_HARDDISK_1f26c814-af40-4046-ac8d-013998d956cc', 'scsi-SQEMU_QEMU_HARDDISK_1f26c814-af40-4046-ac8d-013998d956cc'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1f26c814', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--af42a967--eb71--546a--abb0--a5185990ed2a-osd--block--af42a967--eb71--546a--abb0--a5185990ed2a']}})  2026-02-02 06:34:28.247804 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:34:28.247826 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2944b273', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part16', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part14', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part15', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part1', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-02 06:34:28.247870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:34:28.247896 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:34:28.247914 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-eYxT4m-fzBY-foYi-ha3Z-zoEd-YR0r-TnnIvK', 'dm-uuid-CRYPT-LUKS2-a78e3f4b723a42a3abd44d699a55c416-eYxT4m-fzBY-foYi-ha3Z-zoEd-YR0r-TnnIvK'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-02 06:34:28.247942 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:34:28.247959 | orchestrator | 2026-02-02 06:34:28.247975 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-02 06:34:28.247992 | orchestrator | Monday 02 February 2026 06:34:28 +0000 (0:00:01.322) 1:00:55.599 ******* 2026-02-02 06:34:28.248009 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:34:28.248027 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--af42a967--eb71--546a--abb0--a5185990ed2a-osd--block--af42a967--eb71--546a--abb0--a5185990ed2a', 'dm-uuid-LVM-nQNI9mGSypmWJN7Kribh0RNL5qLQKFSceYxT4mfzBYfoYiha3ZzoEdYR0rTnnIvK'], 'uuids': ['a78e3f4b-723a-42a3-abd4-4d699a55c416'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1f26c814', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['eYxT4m-fzBY-foYi-ha3Z-zoEd-YR0r-TnnIvK']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:34:28.248041 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c15f901f-7629-41e5-bfd5-e721d3f198c6', 'scsi-SQEMU_QEMU_HARDDISK_c15f901f-7629-41e5-bfd5-e721d3f198c6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c15f901f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:34:28.248072 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-HOxmXw-N5cX-V1Nz-Lu3r-OQk9-N5gG-1syyTi', 'scsi-0QEMU_QEMU_HARDDISK_5578c4aa-4507-4a80-9665-78072b9f11f4', 'scsi-SQEMU_QEMU_HARDDISK_5578c4aa-4507-4a80-9665-78072b9f11f4'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5578c4aa', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--2b8f5a57--fc4d--5c4a--8869--764dca19b379-osd--block--2b8f5a57--fc4d--5c4a--8869--764dca19b379']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:34:29.502701 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:34:29.502816 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:34:29.502829 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-21-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:34:29.502838 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:34:29.502846 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-gyH8a8-4MHr-Jn7b-cz7p-hSu8-LEA3-bm3DqO', 'dm-uuid-CRYPT-LUKS2-8edeb25f170042ba8e6d0505727d2968-gyH8a8-4MHr-Jn7b-cz7p-hSu8-LEA3-bm3DqO'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:34:29.502855 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:34:29.502892 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2b8f5a57--fc4d--5c4a--8869--764dca19b379-osd--block--2b8f5a57--fc4d--5c4a--8869--764dca19b379', 'dm-uuid-LVM-2Xx1rXy8ZvvzVeymXUM2Y23jmTeKUn30gyH8a84MHrJn7bcz7phSu8LEA3bm3DqO'], 'uuids': ['8edeb25f-1700-42ba-8e6d-0505727d2968'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5578c4aa', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['gyH8a8-4MHr-Jn7b-cz7p-hSu8-LEA3-bm3DqO']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:34:29.502910 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-yf6lEa-f3nO-iewk-DEDy-Fb6j-Kq2P-dbkgMf', 'scsi-0QEMU_QEMU_HARDDISK_1f26c814-af40-4046-ac8d-013998d956cc', 'scsi-SQEMU_QEMU_HARDDISK_1f26c814-af40-4046-ac8d-013998d956cc'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1f26c814', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--af42a967--eb71--546a--abb0--a5185990ed2a-osd--block--af42a967--eb71--546a--abb0--a5185990ed2a']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:34:29.502922 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:34:29.502943 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2944b273', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part16', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part14', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part15', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part1', 'scsi-SQEMU_QEMU_HARDDISK_2944b273-4436-4bbb-8e69-1106f32efe58-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:34:57.268341 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:34:57.268462 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:34:57.268481 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-eYxT4m-fzBY-foYi-ha3Z-zoEd-YR0r-TnnIvK', 'dm-uuid-CRYPT-LUKS2-a78e3f4b723a42a3abd44d699a55c416-eYxT4m-fzBY-foYi-ha3Z-zoEd-YR0r-TnnIvK'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:34:57.268495 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:34:57.268509 | orchestrator | 2026-02-02 06:34:57.268522 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-02 06:34:57.268534 | orchestrator | Monday 02 February 2026 06:34:29 +0000 (0:00:01.477) 1:00:57.076 ******* 2026-02-02 06:34:57.268545 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:34:57.268557 | orchestrator | 2026-02-02 06:34:57.268568 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-02 06:34:57.268579 | orchestrator | Monday 02 February 2026 06:34:30 +0000 (0:00:01.455) 1:00:58.532 ******* 2026-02-02 06:34:57.268590 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:34:57.268601 | orchestrator | 2026-02-02 06:34:57.268612 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-02 06:34:57.268623 | orchestrator | Monday 02 February 2026 06:34:32 +0000 (0:00:01.134) 1:00:59.666 ******* 2026-02-02 06:34:57.268685 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:34:57.268696 | orchestrator | 2026-02-02 06:34:57.268707 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-02 06:34:57.268718 | orchestrator | Monday 02 February 2026 06:34:33 +0000 (0:00:01.420) 1:01:01.087 ******* 2026-02-02 06:34:57.268729 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:34:57.268762 | orchestrator | 2026-02-02 06:34:57.268773 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-02 06:34:57.268784 | orchestrator | Monday 02 February 2026 06:34:34 +0000 (0:00:01.118) 1:01:02.205 ******* 2026-02-02 06:34:57.268795 | orchestrator | skipping: [testbed-node-3] 2026-02-02 
06:34:57.268806 | orchestrator | 2026-02-02 06:34:57.268816 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-02 06:34:57.268827 | orchestrator | Monday 02 February 2026 06:34:35 +0000 (0:00:01.269) 1:01:03.475 ******* 2026-02-02 06:34:57.268838 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:34:57.268848 | orchestrator | 2026-02-02 06:34:57.268859 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-02 06:34:57.268870 | orchestrator | Monday 02 February 2026 06:34:37 +0000 (0:00:01.147) 1:01:04.622 ******* 2026-02-02 06:34:57.268881 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-02 06:34:57.268893 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-02 06:34:57.268917 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-02 06:34:57.268928 | orchestrator | 2026-02-02 06:34:57.268939 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-02 06:34:57.268949 | orchestrator | Monday 02 February 2026 06:34:38 +0000 (0:00:01.655) 1:01:06.278 ******* 2026-02-02 06:34:57.268960 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-02 06:34:57.268971 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-02 06:34:57.268982 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-02 06:34:57.268993 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:34:57.269003 | orchestrator | 2026-02-02 06:34:57.269014 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-02 06:34:57.269025 | orchestrator | Monday 02 February 2026 06:34:39 +0000 (0:00:01.140) 1:01:07.418 ******* 2026-02-02 06:34:57.269051 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-02-02 06:34:57.269064 | 
orchestrator |
2026-02-02 06:34:57.269075 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-02 06:34:57.269087 | orchestrator | Monday 02 February 2026 06:34:40 +0000 (0:00:01.146) 1:01:08.564 *******
2026-02-02 06:34:57.269098 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:34:57.269109 | orchestrator |
2026-02-02 06:34:57.269120 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-02 06:34:57.269131 | orchestrator | Monday 02 February 2026 06:34:42 +0000 (0:00:01.149) 1:01:09.713 *******
2026-02-02 06:34:57.269141 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:34:57.269152 | orchestrator |
2026-02-02 06:34:57.269166 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-02 06:34:57.269185 | orchestrator | Monday 02 February 2026 06:34:43 +0000 (0:00:01.143) 1:01:10.857 *******
2026-02-02 06:34:57.269203 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:34:57.269221 | orchestrator |
2026-02-02 06:34:57.269239 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-02 06:34:57.269256 | orchestrator | Monday 02 February 2026 06:34:44 +0000 (0:00:01.249) 1:01:12.106 *******
2026-02-02 06:34:57.269274 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:34:57.269290 | orchestrator |
2026-02-02 06:34:57.269308 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-02 06:34:57.269327 | orchestrator | Monday 02 February 2026 06:34:45 +0000 (0:00:01.271) 1:01:13.378 *******
2026-02-02 06:34:57.269345 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 06:34:57.269362 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 06:34:57.269379 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 06:34:57.269396 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:34:57.269413 | orchestrator |
2026-02-02 06:34:57.269445 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-02 06:34:57.269465 | orchestrator | Monday 02 February 2026 06:34:47 +0000 (0:00:01.402) 1:01:14.781 *******
2026-02-02 06:34:57.269483 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 06:34:57.269500 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 06:34:57.269519 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 06:34:57.269537 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:34:57.269556 | orchestrator |
2026-02-02 06:34:57.269573 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-02 06:34:57.269591 | orchestrator | Monday 02 February 2026 06:34:48 +0000 (0:00:01.387) 1:01:16.168 *******
2026-02-02 06:34:57.269609 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 06:34:57.269655 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-02 06:34:57.269677 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-02 06:34:57.269696 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:34:57.269715 | orchestrator |
2026-02-02 06:34:57.269734 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-02 06:34:57.269752 | orchestrator | Monday 02 February 2026 06:34:50 +0000 (0:00:01.448) 1:01:17.616 *******
2026-02-02 06:34:57.269771 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:34:57.269790 | orchestrator |
2026-02-02 06:34:57.269808 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-02 06:34:57.269826 | orchestrator | Monday 02 February 2026 06:34:51 +0000 (0:00:01.146) 1:01:18.763 *******
2026-02-02 06:34:57.269846 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-02 06:34:57.269864 | orchestrator |
2026-02-02 06:34:57.269883 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-02 06:34:57.269902 | orchestrator | Monday 02 February 2026 06:34:52 +0000 (0:00:01.344) 1:01:20.107 *******
2026-02-02 06:34:57.269921 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-02 06:34:57.269939 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 06:34:57.269958 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 06:34:57.269977 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 06:34:57.269995 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-02 06:34:57.270013 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-02 06:34:57.270103 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-02 06:34:57.270123 | orchestrator |
2026-02-02 06:34:57.270142 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-02 06:34:57.270162 | orchestrator | Monday 02 February 2026 06:34:54 +0000 (0:00:02.122) 1:01:22.229 *******
2026-02-02 06:34:57.270193 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-02 06:34:57.270214 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 06:34:57.270235 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 06:34:57.270256 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-02 06:34:57.270276 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-02 06:34:57.270296 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-02 06:34:57.270315 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-02 06:34:57.270335 | orchestrator |
2026-02-02 06:34:57.270370 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-02-02 06:35:49.733419 | orchestrator | Monday 02 February 2026 06:34:57 +0000 (0:00:02.600) 1:01:24.830 *******
2026-02-02 06:35:49.733540 | orchestrator | changed: [testbed-node-3]
2026-02-02 06:35:49.733553 | orchestrator |
2026-02-02 06:35:49.733564 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-02-02 06:35:49.733574 | orchestrator | Monday 02 February 2026 06:34:59 +0000 (0:00:02.275) 1:01:27.105 *******
2026-02-02 06:35:49.733583 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-02 06:35:49.733593 | orchestrator |
2026-02-02 06:35:49.733602 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-02-02 06:35:49.733611 | orchestrator | Monday 02 February 2026 06:35:02 +0000 (0:00:02.751) 1:01:29.857 *******
2026-02-02 06:35:49.733620 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-02 06:35:49.733629 | orchestrator |
2026-02-02 06:35:49.733637 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-02 06:35:49.733704 | orchestrator | Monday 02 February 2026 06:35:04 +0000 (0:00:02.350) 1:01:32.207 *******
2026-02-02 06:35:49.733714 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3
2026-02-02 06:35:49.733723 | orchestrator |
2026-02-02 06:35:49.733732 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-02 06:35:49.733741 | orchestrator | Monday 02 February 2026 06:35:05 +0000 (0:00:01.258) 1:01:33.465 *******
2026-02-02 06:35:49.733749 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3
2026-02-02 06:35:49.733758 | orchestrator |
2026-02-02 06:35:49.733766 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-02 06:35:49.733775 | orchestrator | Monday 02 February 2026 06:35:07 +0000 (0:00:01.111) 1:01:34.655 *******
2026-02-02 06:35:49.733784 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:35:49.733792 | orchestrator |
2026-02-02 06:35:49.733801 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-02 06:35:49.733809 | orchestrator | Monday 02 February 2026 06:35:08 +0000 (0:00:01.111) 1:01:35.767 *******
2026-02-02 06:35:49.733818 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:35:49.733828 | orchestrator |
2026-02-02 06:35:49.733836 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-02 06:35:49.733845 | orchestrator | Monday 02 February 2026 06:35:09 +0000 (0:00:01.494) 1:01:37.261 *******
2026-02-02 06:35:49.733853 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:35:49.733862 | orchestrator |
2026-02-02 06:35:49.733871 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-02 06:35:49.733880 | orchestrator | Monday 02 February 2026 06:35:11 +0000 (0:00:01.477) 1:01:38.739 *******
2026-02-02 06:35:49.733888 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:35:49.733897 | orchestrator |
2026-02-02 06:35:49.733905 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-02 06:35:49.733914 | orchestrator | Monday 02 February 2026 06:35:12 +0000 (0:00:01.501) 1:01:40.241 *******
2026-02-02 06:35:49.733922 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:35:49.733931 | orchestrator |
2026-02-02 06:35:49.733940 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-02 06:35:49.733948 | orchestrator | Monday 02 February 2026 06:35:13 +0000 (0:00:01.113) 1:01:41.354 *******
2026-02-02 06:35:49.733957 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:35:49.733966 | orchestrator |
2026-02-02 06:35:49.733975 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-02 06:35:49.733985 | orchestrator | Monday 02 February 2026 06:35:14 +0000 (0:00:01.154) 1:01:42.509 *******
2026-02-02 06:35:49.733995 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:35:49.734006 | orchestrator |
2026-02-02 06:35:49.734059 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-02 06:35:49.734070 | orchestrator | Monday 02 February 2026 06:35:16 +0000 (0:00:01.105) 1:01:43.615 *******
2026-02-02 06:35:49.734087 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:35:49.734098 | orchestrator |
2026-02-02 06:35:49.734108 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-02 06:35:49.734118 | orchestrator | Monday 02 February 2026 06:35:17 +0000 (0:00:01.518) 1:01:45.134 *******
2026-02-02 06:35:49.734129 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:35:49.734139 | orchestrator |
2026-02-02 06:35:49.734149 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-02 06:35:49.734159 | orchestrator | Monday 02 February 2026 06:35:19 +0000 (0:00:01.524) 1:01:46.658 *******
2026-02-02 06:35:49.734169 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:35:49.734179 | orchestrator |
2026-02-02 06:35:49.734189 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-02 06:35:49.734199 | orchestrator | Monday 02 February 2026 06:35:20 +0000 (0:00:01.187) 1:01:47.846 *******
2026-02-02 06:35:49.734210 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:35:49.734220 | orchestrator |
2026-02-02 06:35:49.734244 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-02 06:35:49.734255 | orchestrator | Monday 02 February 2026 06:35:21 +0000 (0:00:01.139) 1:01:48.985 *******
2026-02-02 06:35:49.734265 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:35:49.734275 | orchestrator |
2026-02-02 06:35:49.734285 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-02 06:35:49.734295 | orchestrator | Monday 02 February 2026 06:35:22 +0000 (0:00:01.113) 1:01:50.099 *******
2026-02-02 06:35:49.734304 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:35:49.734315 | orchestrator |
2026-02-02 06:35:49.734325 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-02 06:35:49.734333 | orchestrator | Monday 02 February 2026 06:35:23 +0000 (0:00:01.138) 1:01:51.238 *******
2026-02-02 06:35:49.734342 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:35:49.734350 | orchestrator |
2026-02-02 06:35:49.734374 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-02 06:35:49.734383 | orchestrator | Monday 02 February 2026 06:35:24 +0000 (0:00:01.128) 1:01:52.366 *******
2026-02-02 06:35:49.734392 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:35:49.734400 | orchestrator |
2026-02-02 06:35:49.734409 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-02 06:35:49.734417 | orchestrator | Monday 02 February 2026 06:35:25 +0000 (0:00:01.132) 1:01:53.499 *******
2026-02-02 06:35:49.734426 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:35:49.734435 | orchestrator |
2026-02-02 06:35:49.734443 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-02 06:35:49.734452 | orchestrator | Monday 02 February 2026 06:35:27 +0000 (0:00:01.108) 1:01:54.608 *******
2026-02-02 06:35:49.734460 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:35:49.734469 | orchestrator |
2026-02-02 06:35:49.734477 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-02 06:35:49.734486 | orchestrator | Monday 02 February 2026 06:35:28 +0000 (0:00:01.120) 1:01:55.728 *******
2026-02-02 06:35:49.734494 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:35:49.734503 | orchestrator |
2026-02-02 06:35:49.734512 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-02 06:35:49.734520 | orchestrator | Monday 02 February 2026 06:35:29 +0000 (0:00:01.320) 1:01:57.049 *******
2026-02-02 06:35:49.734529 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:35:49.734537 | orchestrator |
2026-02-02 06:35:49.734546 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-02 06:35:49.734554 | orchestrator | Monday 02 February 2026 06:35:30 +0000 (0:00:01.178) 1:01:58.227 *******
2026-02-02 06:35:49.734563 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:35:49.734571 | orchestrator |
2026-02-02 06:35:49.734580 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-02 06:35:49.734589 | orchestrator | Monday 02 February 2026 06:35:31 +0000 (0:00:01.139) 1:01:59.366 *******
2026-02-02 06:35:49.734603 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:35:49.734612 | orchestrator |
2026-02-02 06:35:49.734620 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-02 06:35:49.734629 | orchestrator | Monday 02 February 2026 06:35:32 +0000 (0:00:01.141) 1:02:00.508 *******
2026-02-02 06:35:49.734637 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:35:49.734662 | orchestrator |
2026-02-02 06:35:49.734671 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-02 06:35:49.734680 | orchestrator | Monday 02 February 2026 06:35:34 +0000 (0:00:01.123) 1:02:01.632 *******
2026-02-02 06:35:49.734688 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:35:49.734697 | orchestrator |
2026-02-02 06:35:49.734705 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-02 06:35:49.734714 | orchestrator | Monday 02 February 2026 06:35:35 +0000 (0:00:01.203) 1:02:02.835 *******
2026-02-02 06:35:49.734723 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:35:49.734731 | orchestrator |
2026-02-02 06:35:49.734740 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-02 06:35:49.734748 | orchestrator | Monday 02 February 2026 06:35:36 +0000 (0:00:01.130) 1:02:03.965 *******
2026-02-02 06:35:49.734757 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:35:49.734765 | orchestrator |
2026-02-02 06:35:49.734774 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-02 06:35:49.734782 | orchestrator | Monday 02 February 2026 06:35:37 +0000 (0:00:01.099) 1:02:05.065 *******
2026-02-02 06:35:49.734791 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:35:49.734799 | orchestrator |
2026-02-02 06:35:49.734808 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-02 06:35:49.734817 | orchestrator | Monday 02 February 2026 06:35:38 +0000 (0:00:01.158) 1:02:06.223 *******
2026-02-02 06:35:49.734826 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:35:49.734834 | orchestrator |
2026-02-02 06:35:49.734843 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-02 06:35:49.734851 | orchestrator | Monday 02 February 2026 06:35:39 +0000 (0:00:01.105) 1:02:07.329 *******
2026-02-02 06:35:49.734860 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:35:49.734868 | orchestrator |
2026-02-02 06:35:49.734877 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-02 06:35:49.734885 | orchestrator | Monday 02 February 2026 06:35:40 +0000 (0:00:01.134) 1:02:08.464 *******
2026-02-02 06:35:49.734894 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:35:49.734902 | orchestrator |
2026-02-02 06:35:49.734911 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-02 06:35:49.734919 | orchestrator | Monday 02 February 2026 06:35:42 +0000 (0:00:01.122) 1:02:09.586 *******
2026-02-02 06:35:49.734928 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:35:49.734936 | orchestrator |
2026-02-02 06:35:49.734945 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-02 06:35:49.734953 | orchestrator | Monday 02 February 2026 06:35:43 +0000 (0:00:01.169) 1:02:10.756 *******
2026-02-02 06:35:49.734962 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:35:49.734970 | orchestrator |
2026-02-02 06:35:49.734979 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-02 06:35:49.734992 | orchestrator | Monday 02 February 2026 06:35:44 +0000 (0:00:01.106) 1:02:11.863 *******
2026-02-02 06:35:49.735001 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:35:49.735010 | orchestrator |
2026-02-02 06:35:49.735018 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-02 06:35:49.735027 | orchestrator | Monday 02 February 2026 06:35:46 +0000 (0:00:02.033) 1:02:13.896 *******
2026-02-02 06:35:49.735036 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:35:49.735044 | orchestrator |
2026-02-02 06:35:49.735053 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-02 06:35:49.735061 | orchestrator | Monday 02 February 2026 06:35:48 +0000 (0:00:02.158) 1:02:16.055 *******
2026-02-02 06:35:49.735078 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
2026-02-02 06:35:49.735087 | orchestrator |
2026-02-02 06:35:49.735095 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-02 06:35:49.735109 | orchestrator | Monday 02 February 2026 06:35:49 +0000 (0:00:01.248) 1:02:17.303 *******
2026-02-02 06:36:36.070340 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:36:36.070462 | orchestrator |
2026-02-02 06:36:36.070479 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-02 06:36:36.070492 | orchestrator | Monday 02 February 2026 06:35:50 +0000 (0:00:01.159) 1:02:18.463 *******
2026-02-02 06:36:36.070502 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:36:36.070512 | orchestrator |
2026-02-02 06:36:36.070522 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-02 06:36:36.070532 | orchestrator | Monday 02 February 2026 06:35:52 +0000 (0:00:01.135) 1:02:19.598 *******
2026-02-02 06:36:36.070541 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-02 06:36:36.070551 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-02 06:36:36.070561 | orchestrator |
2026-02-02 06:36:36.070571 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-02 06:36:36.070581 | orchestrator | Monday 02 February 2026 06:35:53 +0000 (0:00:01.823) 1:02:21.421 *******
2026-02-02 06:36:36.070590 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:36:36.070603 | orchestrator |
2026-02-02 06:36:36.070614 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-02 06:36:36.070626 | orchestrator | Monday 02 February 2026 06:35:55 +0000 (0:00:01.505) 1:02:22.927 *******
2026-02-02 06:36:36.070636 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:36:36.070647 | orchestrator |
2026-02-02 06:36:36.070690 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-02 06:36:36.070704 | orchestrator | Monday 02 February 2026 06:35:56 +0000 (0:00:01.158) 1:02:24.086 *******
2026-02-02 06:36:36.070715 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:36:36.070726 | orchestrator |
2026-02-02 06:36:36.070737 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-02 06:36:36.070748 | orchestrator | Monday 02 February 2026 06:35:57 +0000 (0:00:01.162) 1:02:25.248 *******
2026-02-02 06:36:36.070759 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:36:36.070770 | orchestrator |
2026-02-02 06:36:36.070781 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-02 06:36:36.070792 | orchestrator | Monday 02 February 2026 06:35:58 +0000 (0:00:01.121) 1:02:26.370 *******
2026-02-02 06:36:36.070803 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3
2026-02-02 06:36:36.070815 | orchestrator |
2026-02-02 06:36:36.070825 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-02 06:36:36.070836 | orchestrator | Monday 02 February 2026 06:35:59 +0000 (0:00:01.124) 1:02:27.494 *******
2026-02-02 06:36:36.070847 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:36:36.070858 | orchestrator |
2026-02-02 06:36:36.070869 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-02 06:36:36.070880 | orchestrator | Monday 02 February 2026 06:36:01 +0000 (0:00:01.685) 1:02:29.180 *******
2026-02-02 06:36:36.070891 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-02 06:36:36.070902 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-02 06:36:36.070913 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-02 06:36:36.070924 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:36:36.070934 | orchestrator |
2026-02-02 06:36:36.070946 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-02 06:36:36.070989 | orchestrator | Monday 02 February 2026 06:36:02 +0000 (0:00:01.129) 1:02:30.310 *******
2026-02-02 06:36:36.071001 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:36:36.071012 | orchestrator |
2026-02-02 06:36:36.071022 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-02 06:36:36.071033 | orchestrator | Monday 02 February 2026 06:36:03 +0000 (0:00:01.119) 1:02:31.429 *******
2026-02-02 06:36:36.071044 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:36:36.071055 | orchestrator |
2026-02-02 06:36:36.071065 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-02 06:36:36.071076 | orchestrator | Monday 02 February 2026 06:36:05 +0000 (0:00:01.249) 1:02:32.678 *******
2026-02-02 06:36:36.071087 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:36:36.071098 | orchestrator |
2026-02-02 06:36:36.071108 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-02 06:36:36.071119 | orchestrator | Monday 02 February 2026 06:36:06 +0000 (0:00:01.184) 1:02:33.863 *******
2026-02-02 06:36:36.071130 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:36:36.071140 | orchestrator |
2026-02-02 06:36:36.071151 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-02 06:36:36.071162 | orchestrator | Monday 02 February 2026 06:36:07 +0000 (0:00:01.147) 1:02:35.011 *******
2026-02-02 06:36:36.071173 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:36:36.071183 | orchestrator |
2026-02-02 06:36:36.071194 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-02 06:36:36.071220 | orchestrator | Monday 02 February 2026 06:36:08 +0000 (0:00:01.135) 1:02:36.147 *******
2026-02-02 06:36:36.071231 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:36:36.071242 | orchestrator |
2026-02-02 06:36:36.071253 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-02 06:36:36.071264 | orchestrator | Monday 02 February 2026 06:36:11 +0000 (0:00:02.497) 1:02:38.644 *******
2026-02-02 06:36:36.071276 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:36:36.071294 | orchestrator |
2026-02-02 06:36:36.071312 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-02 06:36:36.071331 | orchestrator | Monday 02 February 2026 06:36:12 +0000 (0:00:01.147) 1:02:39.791 *******
2026-02-02 06:36:36.071350 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3
2026-02-02 06:36:36.071367 | orchestrator |
2026-02-02 06:36:36.071386 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-02 06:36:36.071431 | orchestrator | Monday 02 February 2026 06:36:13 +0000 (0:00:01.110) 1:02:40.901 *******
2026-02-02 06:36:36.071451 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:36:36.071469 | orchestrator |
2026-02-02 06:36:36.071487 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-02 06:36:36.071505 | orchestrator | Monday 02 February 2026 06:36:14 +0000 (0:00:01.145) 1:02:42.047 *******
2026-02-02 06:36:36.071524 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:36:36.071543 | orchestrator |
2026-02-02 06:36:36.071562 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-02 06:36:36.071581 | orchestrator | Monday 02 February 2026 06:36:15 +0000 (0:00:01.135) 1:02:43.183 *******
2026-02-02 06:36:36.071600 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:36:36.071618 | orchestrator |
2026-02-02 06:36:36.071637 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-02 06:36:36.071657 | orchestrator | Monday 02 February 2026 06:36:16 +0000 (0:00:01.152) 1:02:44.336 *******
2026-02-02 06:36:36.071732 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:36:36.071750 | orchestrator |
2026-02-02 06:36:36.071768 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-02 06:36:36.071780 | orchestrator | Monday 02 February 2026 06:36:17 +0000 (0:00:01.155) 1:02:45.492 *******
2026-02-02 06:36:36.071790 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:36:36.071801 | orchestrator |
2026-02-02 06:36:36.071812 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-02 06:36:36.071836 | orchestrator | Monday 02 February 2026 06:36:19 +0000 (0:00:01.121) 1:02:46.613 *******
2026-02-02 06:36:36.071846 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:36:36.071857 | orchestrator |
2026-02-02 06:36:36.071868 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-02 06:36:36.071879 | orchestrator | Monday 02 February 2026 06:36:20 +0000 (0:00:01.253) 1:02:47.867 *******
2026-02-02 06:36:36.071890 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:36:36.071901 | orchestrator |
2026-02-02 06:36:36.071911 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-02 06:36:36.071922 | orchestrator | Monday 02 February 2026 06:36:21 +0000 (0:00:01.134) 1:02:49.002 *******
2026-02-02 06:36:36.071933 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:36:36.071944 | orchestrator |
2026-02-02 06:36:36.071955 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-02 06:36:36.071966 | orchestrator | Monday 02 February 2026 06:36:22 +0000 (0:00:01.190) 1:02:50.193 *******
2026-02-02 06:36:36.071977 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:36:36.071987 | orchestrator |
2026-02-02 06:36:36.071998 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-02 06:36:36.072009 | orchestrator | Monday 02 February 2026 06:36:23 +0000 (0:00:01.131) 1:02:51.325 *******
2026-02-02 06:36:36.072020 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3
2026-02-02 06:36:36.072031 | orchestrator |
2026-02-02 06:36:36.072042 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-02 06:36:36.072053 | orchestrator | Monday 02 February 2026 06:36:24 +0000 (0:00:01.171) 1:02:52.496 *******
2026-02-02 06:36:36.072063 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
2026-02-02 06:36:36.072074 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
2026-02-02 06:36:36.072085 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-02-02 06:36:36.072096 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-02-02 06:36:36.072106 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-02-02 06:36:36.072117 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-02-02 06:36:36.072128 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-02-02 06:36:36.072138 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-02-02 06:36:36.072150 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-02 06:36:36.072160 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-02 06:36:36.072171 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-02 06:36:36.072182 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-02 06:36:36.072192 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-02 06:36:36.072204 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-02 06:36:36.072214 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2026-02-02 06:36:36.072225 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
2026-02-02 06:36:36.072236 | orchestrator |
2026-02-02 06:36:36.072246 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-02 06:36:36.072257 | orchestrator | Monday 02 February 2026 06:36:31 +0000 (0:00:06.476) 1:02:58.973 *******
2026-02-02 06:36:36.072268 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3
2026-02-02 06:36:36.072279 | orchestrator |
2026-02-02 06:36:36.072298 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-02 06:36:36.072309 | orchestrator | Monday 02 February 2026 06:36:32 +0000 (0:00:01.113) 1:03:00.086 *******
2026-02-02 06:36:36.072320 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-02 06:36:36.072339 | orchestrator |
2026-02-02 06:36:36.072351 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-02 06:36:36.072361 | orchestrator | Monday 02 February 2026 06:36:34 +0000 (0:00:01.550) 1:03:01.636 *******
2026-02-02 06:36:36.072372 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-02 06:36:36.072383 | orchestrator |
2026-02-02 06:36:36.072394 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-02 06:36:36.072415 | orchestrator | Monday 02 February 2026 06:36:36 +0000 (0:00:02.001) 1:03:03.638 *******
2026-02-02 06:37:25.874826 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:37:25.874941 | orchestrator |
2026-02-02 06:37:25.874959 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-02 06:37:25.874972 | orchestrator | Monday 02 February 2026 06:36:37 +0000 (0:00:01.135) 1:03:04.774 *******
2026-02-02 06:37:25.874984 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:37:25.874995 | orchestrator |
2026-02-02 06:37:25.875006 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-02 06:37:25.875017 | orchestrator | Monday 02 February 2026 06:36:38 +0000 (0:00:01.104) 1:03:05.878 *******
2026-02-02 06:37:25.875027 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:37:25.875038 | orchestrator |
2026-02-02 06:37:25.875048 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-02 06:37:25.875059 | orchestrator | Monday 02 February 2026 06:36:39 +0000 (0:00:01.255) 1:03:07.134 *******
2026-02-02 06:37:25.875069 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:37:25.875080 | orchestrator |
2026-02-02 06:37:25.875091 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-02 06:37:25.875102 | orchestrator | Monday 02 February 2026 06:36:40 +0000 (0:00:01.125) 1:03:08.260 *******
2026-02-02 06:37:25.875112 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:37:25.875123 | orchestrator |
2026-02-02 06:37:25.875133 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-02 06:37:25.875145 | orchestrator | Monday 02 February 2026 06:36:41 +0000 (0:00:01.137) 1:03:09.398 *******
2026-02-02 06:37:25.875156 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:37:25.875167 | orchestrator |
2026-02-02 06:37:25.875178 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-02 06:37:25.875188 | orchestrator | Monday 02 February 2026 06:36:42 +0000 (0:00:01.130) 1:03:10.528 *******
2026-02-02 06:37:25.875199 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:37:25.875209 | orchestrator |
2026-02-02 06:37:25.875220 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-02 06:37:25.875231 | orchestrator | Monday 02 February 2026 06:36:44 +0000 (0:00:01.121) 1:03:11.650 *******
2026-02-02 06:37:25.875242 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:37:25.875253 | orchestrator |
2026-02-02 06:37:25.875264 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-02 06:37:25.875275 | orchestrator | Monday 02 February 2026 06:36:45 +0000 (0:00:01.121) 1:03:12.772 *******
2026-02-02 06:37:25.875285 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:37:25.875298 | orchestrator |
2026-02-02 06:37:25.875311 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-02 06:37:25.875325 | orchestrator | Monday 02 February 2026 06:36:46 +0000 (0:00:01.177) 1:03:13.950 *******
2026-02-02 06:37:25.875337 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:37:25.875350 | orchestrator |
2026-02-02 06:37:25.875363 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-02 06:37:25.875375 | orchestrator | Monday 02 February 2026 06:36:47 +0000 (0:00:01.149) 1:03:15.099 *******
2026-02-02 06:37:25.875388 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:37:25.875400 | orchestrator |
2026-02-02 06:37:25.875412 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-02 06:37:25.875447 | orchestrator | Monday 02 February 2026 06:36:48 +0000 (0:00:01.153) 1:03:16.253 *******
2026-02-02 06:37:25.875461 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-02-02 06:37:25.875473 | orchestrator |
2026-02-02 06:37:25.875484 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-02 06:37:25.875495 | orchestrator | Monday 02 February 2026 06:36:53 +0000 (0:00:04.416) 1:03:20.669 *******
2026-02-02 06:37:25.875506 | orchestrator |
ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-02 06:37:25.875517 | orchestrator | 2026-02-02 06:37:25.875528 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-02 06:37:25.875539 | orchestrator | Monday 02 February 2026 06:36:54 +0000 (0:00:01.215) 1:03:21.884 ******* 2026-02-02 06:37:25.875551 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-02-02 06:37:25.875580 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-02-02 06:37:25.875592 | orchestrator | 2026-02-02 06:37:25.875603 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-02 06:37:25.875614 | orchestrator | Monday 02 February 2026 06:36:59 +0000 (0:00:04.762) 1:03:26.647 ******* 2026-02-02 06:37:25.875624 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:37:25.875635 | orchestrator | 2026-02-02 06:37:25.875646 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-02 06:37:25.875656 | orchestrator | Monday 02 February 2026 06:37:00 +0000 (0:00:01.129) 1:03:27.776 ******* 2026-02-02 06:37:25.875667 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:37:25.875702 | orchestrator | 2026-02-02 06:37:25.875713 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-02 06:37:25.875742 | orchestrator | Monday 02 February 2026 06:37:01 +0000 (0:00:01.207) 1:03:28.984 ******* 2026-02-02 06:37:25.875754 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:37:25.875765 | orchestrator | 2026-02-02 06:37:25.875775 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-02 06:37:25.875786 | orchestrator | Monday 02 February 2026 06:37:02 +0000 (0:00:01.185) 1:03:30.169 ******* 2026-02-02 06:37:25.875806 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:37:25.875825 | orchestrator | 2026-02-02 06:37:25.875845 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-02 06:37:25.875864 | orchestrator | Monday 02 February 2026 06:37:03 +0000 (0:00:01.141) 1:03:31.311 ******* 2026-02-02 06:37:25.875883 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:37:25.875904 | orchestrator | 2026-02-02 06:37:25.875917 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-02 06:37:25.875927 | orchestrator | Monday 02 February 2026 06:37:04 +0000 (0:00:01.146) 1:03:32.458 ******* 2026-02-02 06:37:25.875938 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:37:25.875949 | orchestrator | 2026-02-02 06:37:25.875960 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-02 06:37:25.875970 | orchestrator | Monday 02 February 2026 06:37:06 +0000 (0:00:01.307) 1:03:33.766 ******* 2026-02-02 06:37:25.875981 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-02 06:37:25.875992 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-02 06:37:25.876002 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-02 06:37:25.876024 | orchestrator | skipping: 
[testbed-node-3] 2026-02-02 06:37:25.876035 | orchestrator | 2026-02-02 06:37:25.876045 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-02 06:37:25.876056 | orchestrator | Monday 02 February 2026 06:37:07 +0000 (0:00:01.404) 1:03:35.170 ******* 2026-02-02 06:37:25.876067 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-02 06:37:25.876077 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-02 06:37:25.876088 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-02 06:37:25.876098 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:37:25.876109 | orchestrator | 2026-02-02 06:37:25.876120 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-02 06:37:25.876130 | orchestrator | Monday 02 February 2026 06:37:08 +0000 (0:00:01.388) 1:03:36.558 ******* 2026-02-02 06:37:25.876141 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-02 06:37:25.876151 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-02 06:37:25.876161 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-02 06:37:25.876172 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:37:25.876182 | orchestrator | 2026-02-02 06:37:25.876193 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-02 06:37:25.876203 | orchestrator | Monday 02 February 2026 06:37:10 +0000 (0:00:01.494) 1:03:38.052 ******* 2026-02-02 06:37:25.876214 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:37:25.876225 | orchestrator | 2026-02-02 06:37:25.876235 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-02 06:37:25.876246 | orchestrator | Monday 02 February 2026 06:37:11 +0000 (0:00:01.202) 1:03:39.255 ******* 2026-02-02 06:37:25.876256 | orchestrator | ok: 
[testbed-node-3] => (item=0) 2026-02-02 06:37:25.876267 | orchestrator | 2026-02-02 06:37:25.876277 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-02 06:37:25.876288 | orchestrator | Monday 02 February 2026 06:37:13 +0000 (0:00:01.374) 1:03:40.629 ******* 2026-02-02 06:37:25.876298 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:37:25.876309 | orchestrator | 2026-02-02 06:37:25.876320 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-02 06:37:25.876330 | orchestrator | Monday 02 February 2026 06:37:14 +0000 (0:00:01.746) 1:03:42.376 ******* 2026-02-02 06:37:25.876341 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3 2026-02-02 06:37:25.876351 | orchestrator | 2026-02-02 06:37:25.876362 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-02 06:37:25.876372 | orchestrator | Monday 02 February 2026 06:37:16 +0000 (0:00:01.620) 1:03:43.997 ******* 2026-02-02 06:37:25.876383 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 06:37:25.876394 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-02 06:37:25.876404 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-02 06:37:25.876415 | orchestrator | 2026-02-02 06:37:25.876426 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-02 06:37:25.876436 | orchestrator | Monday 02 February 2026 06:37:19 +0000 (0:00:03.315) 1:03:47.312 ******* 2026-02-02 06:37:25.876447 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-02 06:37:25.876457 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-02 06:37:25.876474 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:37:25.876485 | orchestrator | 2026-02-02 06:37:25.876496 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-02-02 06:37:25.876506 | orchestrator | Monday 02 February 2026 06:37:21 +0000 (0:00:01.924) 1:03:49.237 ******* 2026-02-02 06:37:25.876516 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:37:25.876527 | orchestrator | 2026-02-02 06:37:25.876538 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-02 06:37:25.876548 | orchestrator | Monday 02 February 2026 06:37:22 +0000 (0:00:01.111) 1:03:50.348 ******* 2026-02-02 06:37:25.876565 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3 2026-02-02 06:37:25.876576 | orchestrator | 2026-02-02 06:37:25.876587 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-02 06:37:25.876598 | orchestrator | Monday 02 February 2026 06:37:24 +0000 (0:00:01.482) 1:03:51.831 ******* 2026-02-02 06:37:25.876617 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-02 06:38:40.184691 | orchestrator | 2026-02-02 06:38:40.184802 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-02 06:38:40.184810 | orchestrator | Monday 02 February 2026 06:37:25 +0000 (0:00:01.615) 1:03:53.446 ******* 2026-02-02 06:38:40.184816 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 06:38:40.184822 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-02 06:38:40.184827 | orchestrator | 2026-02-02 06:38:40.184832 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-02 06:38:40.184836 | orchestrator | Monday 02 February 2026 06:37:30 +0000 (0:00:05.112) 1:03:58.559 ******* 
2026-02-02 06:38:40.184841 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 06:38:40.184846 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-02 06:38:40.184850 | orchestrator | 2026-02-02 06:38:40.184854 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-02 06:38:40.184859 | orchestrator | Monday 02 February 2026 06:37:34 +0000 (0:00:03.168) 1:04:01.727 ******* 2026-02-02 06:38:40.184863 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-02 06:38:40.184868 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:38:40.184873 | orchestrator | 2026-02-02 06:38:40.184877 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-02 06:38:40.184882 | orchestrator | Monday 02 February 2026 06:37:36 +0000 (0:00:01.971) 1:04:03.698 ******* 2026-02-02 06:38:40.184886 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-02-02 06:38:40.184891 | orchestrator | 2026-02-02 06:38:40.184895 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-02 06:38:40.184899 | orchestrator | Monday 02 February 2026 06:37:37 +0000 (0:00:01.534) 1:04:05.233 ******* 2026-02-02 06:38:40.184904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 06:38:40.184909 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 06:38:40.184913 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 06:38:40.184918 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-02-02 06:38:40.184922 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 06:38:40.184926 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:38:40.184931 | orchestrator | 2026-02-02 06:38:40.184935 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-02 06:38:40.184939 | orchestrator | Monday 02 February 2026 06:37:39 +0000 (0:00:01.955) 1:04:07.189 ******* 2026-02-02 06:38:40.184944 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 06:38:40.184948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 06:38:40.184968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 06:38:40.184973 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 06:38:40.184977 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 06:38:40.184981 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:38:40.184986 | orchestrator | 2026-02-02 06:38:40.184990 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-02 06:38:40.184996 | orchestrator | Monday 02 February 2026 06:37:41 +0000 (0:00:01.785) 1:04:08.975 ******* 2026-02-02 06:38:40.185013 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-02 06:38:40.185020 
| orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-02 06:38:40.185026 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-02 06:38:40.185032 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-02 06:38:40.185039 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-02 06:38:40.185045 | orchestrator | 2026-02-02 06:38:40.185051 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-02 06:38:40.185067 | orchestrator | Monday 02 February 2026 06:38:12 +0000 (0:00:31.186) 1:04:40.162 ******* 2026-02-02 06:38:40.185074 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:38:40.185080 | orchestrator | 2026-02-02 06:38:40.185085 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-02 06:38:40.185091 | orchestrator | Monday 02 February 2026 06:38:13 +0000 (0:00:01.128) 1:04:41.291 ******* 2026-02-02 06:38:40.185097 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:38:40.185102 | orchestrator | 2026-02-02 06:38:40.185108 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-02 06:38:40.185114 | orchestrator | Monday 02 February 2026 06:38:14 +0000 (0:00:01.134) 1:04:42.425 ******* 2026-02-02 06:38:40.185120 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3 2026-02-02 06:38:40.185125 | orchestrator | 2026-02-02 06:38:40.185131 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-02-02 06:38:40.185137 | orchestrator | Monday 02 February 2026 06:38:16 +0000 (0:00:01.521) 1:04:43.947 ******* 2026-02-02 06:38:40.185143 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3 2026-02-02 06:38:40.185148 | orchestrator | 2026-02-02 06:38:40.185154 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-02 06:38:40.185160 | orchestrator | Monday 02 February 2026 06:38:17 +0000 (0:00:01.447) 1:04:45.394 ******* 2026-02-02 06:38:40.185166 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:38:40.185172 | orchestrator | 2026-02-02 06:38:40.185178 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-02 06:38:40.185183 | orchestrator | Monday 02 February 2026 06:38:19 +0000 (0:00:02.040) 1:04:47.435 ******* 2026-02-02 06:38:40.185189 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:38:40.185195 | orchestrator | 2026-02-02 06:38:40.185201 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-02 06:38:40.185206 | orchestrator | Monday 02 February 2026 06:38:21 +0000 (0:00:01.991) 1:04:49.427 ******* 2026-02-02 06:38:40.185212 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:38:40.185223 | orchestrator | 2026-02-02 06:38:40.185228 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-02 06:38:40.185234 | orchestrator | Monday 02 February 2026 06:38:24 +0000 (0:00:02.237) 1:04:51.664 ******* 2026-02-02 06:38:40.185240 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-02 06:38:40.185246 | orchestrator | 2026-02-02 06:38:40.185252 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-02-02 06:38:40.185257 | 
orchestrator | 2026-02-02 06:38:40.185264 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-02 06:38:40.185271 | orchestrator | Monday 02 February 2026 06:38:27 +0000 (0:00:03.088) 1:04:54.753 ******* 2026-02-02 06:38:40.185278 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-02-02 06:38:40.185285 | orchestrator | 2026-02-02 06:38:40.185292 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-02 06:38:40.185299 | orchestrator | Monday 02 February 2026 06:38:28 +0000 (0:00:01.144) 1:04:55.897 ******* 2026-02-02 06:38:40.185305 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:38:40.185311 | orchestrator | 2026-02-02 06:38:40.185318 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-02 06:38:40.185325 | orchestrator | Monday 02 February 2026 06:38:29 +0000 (0:00:01.427) 1:04:57.325 ******* 2026-02-02 06:38:40.185332 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:38:40.185339 | orchestrator | 2026-02-02 06:38:40.185345 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-02 06:38:40.185352 | orchestrator | Monday 02 February 2026 06:38:30 +0000 (0:00:01.142) 1:04:58.467 ******* 2026-02-02 06:38:40.185358 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:38:40.185364 | orchestrator | 2026-02-02 06:38:40.185370 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-02 06:38:40.185376 | orchestrator | Monday 02 February 2026 06:38:32 +0000 (0:00:01.464) 1:04:59.932 ******* 2026-02-02 06:38:40.185382 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:38:40.185387 | orchestrator | 2026-02-02 06:38:40.185393 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-02 06:38:40.185399 | orchestrator | Monday 02 
February 2026 06:38:33 +0000 (0:00:01.199) 1:05:01.131 ******* 2026-02-02 06:38:40.185405 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:38:40.185410 | orchestrator | 2026-02-02 06:38:40.185416 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-02 06:38:40.185422 | orchestrator | Monday 02 February 2026 06:38:34 +0000 (0:00:01.113) 1:05:02.245 ******* 2026-02-02 06:38:40.185428 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:38:40.185433 | orchestrator | 2026-02-02 06:38:40.185439 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-02 06:38:40.185448 | orchestrator | Monday 02 February 2026 06:38:35 +0000 (0:00:01.200) 1:05:03.446 ******* 2026-02-02 06:38:40.185454 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:38:40.185460 | orchestrator | 2026-02-02 06:38:40.185465 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-02 06:38:40.185471 | orchestrator | Monday 02 February 2026 06:38:36 +0000 (0:00:01.123) 1:05:04.570 ******* 2026-02-02 06:38:40.185477 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:38:40.185483 | orchestrator | 2026-02-02 06:38:40.185488 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-02 06:38:40.185494 | orchestrator | Monday 02 February 2026 06:38:38 +0000 (0:00:01.172) 1:05:05.743 ******* 2026-02-02 06:38:40.185500 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 06:38:40.185506 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 06:38:40.185512 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 06:38:40.185517 | orchestrator | 2026-02-02 06:38:40.185523 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-02-02 06:38:40.185536 | orchestrator | Monday 02 February 2026 06:38:40 +0000 (0:00:02.003) 1:05:07.746 ******* 2026-02-02 06:39:05.601971 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:39:05.602150 | orchestrator | 2026-02-02 06:39:05.602171 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-02 06:39:05.602184 | orchestrator | Monday 02 February 2026 06:38:41 +0000 (0:00:01.725) 1:05:09.472 ******* 2026-02-02 06:39:05.602195 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 06:39:05.602207 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 06:39:05.602218 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 06:39:05.602229 | orchestrator | 2026-02-02 06:39:05.602240 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-02 06:39:05.602251 | orchestrator | Monday 02 February 2026 06:38:44 +0000 (0:00:02.904) 1:05:12.377 ******* 2026-02-02 06:39:05.602262 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-02 06:39:05.602274 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-02 06:39:05.602285 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-02 06:39:05.602296 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:39:05.602307 | orchestrator | 2026-02-02 06:39:05.602318 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-02 06:39:05.602329 | orchestrator | Monday 02 February 2026 06:38:46 +0000 (0:00:01.437) 1:05:13.814 ******* 2026-02-02 06:39:05.602343 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-02 06:39:05.602357 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-02 06:39:05.602368 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-02 06:39:05.602379 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:39:05.602390 | orchestrator | 2026-02-02 06:39:05.602409 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-02 06:39:05.602427 | orchestrator | Monday 02 February 2026 06:38:47 +0000 (0:00:01.689) 1:05:15.504 ******* 2026-02-02 06:39:05.602441 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 06:39:05.602456 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 06:39:05.602468 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 06:39:05.602517 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:39:05.602532 | orchestrator | 2026-02-02 06:39:05.602545 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-02 06:39:05.602558 | orchestrator | Monday 02 February 2026 06:38:49 +0000 (0:00:01.180) 1:05:16.685 ******* 2026-02-02 06:39:05.602592 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '01c921aa07f8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-02 06:38:42.436044', 'end': '2026-02-02 06:38:42.487021', 'delta': '0:00:00.050977', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['01c921aa07f8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-02 06:39:05.602610 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'c530967d0aad', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-02 06:38:43.055492', 'end': '2026-02-02 06:38:43.091401', 'delta': '0:00:00.035909', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c530967d0aad'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-02 06:39:05.602623 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'a68c96a70534', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-02 06:38:43.580976', 'end': '2026-02-02 06:38:43.632711', 'delta': '0:00:00.051735', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a68c96a70534'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-02 06:39:05.602636 | orchestrator | 2026-02-02 06:39:05.602649 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-02 06:39:05.602662 | orchestrator | Monday 02 February 2026 06:38:50 +0000 (0:00:01.180) 1:05:17.866 ******* 2026-02-02 06:39:05.602674 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:39:05.602687 | orchestrator | 2026-02-02 06:39:05.602725 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-02 06:39:05.602740 | orchestrator | Monday 02 February 2026 06:38:51 +0000 (0:00:01.307) 1:05:19.174 ******* 2026-02-02 06:39:05.602753 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:39:05.602766 | orchestrator | 2026-02-02 06:39:05.602779 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-02-02 06:39:05.602792 | orchestrator | Monday 02 February 2026 06:38:52 +0000 (0:00:01.228) 1:05:20.402 ******* 2026-02-02 06:39:05.602804 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:39:05.602816 | orchestrator | 2026-02-02 06:39:05.602837 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-02 06:39:05.602852 | orchestrator | Monday 02 February 2026 06:38:54 +0000 (0:00:01.214) 1:05:21.617 ******* 2026-02-02 06:39:05.602863 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-02 06:39:05.602874 | orchestrator | 2026-02-02 06:39:05.602899 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 06:39:05.602913 | orchestrator | Monday 02 February 2026 06:38:56 +0000 (0:00:02.061) 1:05:23.678 ******* 2026-02-02 06:39:05.602923 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:39:05.602934 | orchestrator | 2026-02-02 06:39:05.602944 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-02 06:39:05.602955 | orchestrator | Monday 02 February 2026 06:38:57 +0000 (0:00:01.118) 1:05:24.797 ******* 2026-02-02 06:39:05.602966 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:39:05.602977 | orchestrator | 2026-02-02 06:39:05.602987 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-02 06:39:05.602998 | orchestrator | Monday 02 February 2026 06:38:58 +0000 (0:00:01.129) 1:05:25.926 ******* 2026-02-02 06:39:05.603008 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:39:05.603019 | orchestrator | 2026-02-02 06:39:05.603030 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 06:39:05.603041 | orchestrator | Monday 02 February 2026 06:38:59 +0000 (0:00:01.265) 1:05:27.192 ******* 2026-02-02 06:39:05.603052 | orchestrator | 
skipping: [testbed-node-4] 2026-02-02 06:39:05.603062 | orchestrator | 2026-02-02 06:39:05.603079 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-02 06:39:05.603090 | orchestrator | Monday 02 February 2026 06:39:00 +0000 (0:00:01.225) 1:05:28.418 ******* 2026-02-02 06:39:05.603101 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:39:05.603111 | orchestrator | 2026-02-02 06:39:05.603122 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-02 06:39:05.603133 | orchestrator | Monday 02 February 2026 06:39:01 +0000 (0:00:01.148) 1:05:29.567 ******* 2026-02-02 06:39:05.603144 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:39:05.603155 | orchestrator | 2026-02-02 06:39:05.603165 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-02 06:39:05.603176 | orchestrator | Monday 02 February 2026 06:39:03 +0000 (0:00:01.154) 1:05:30.721 ******* 2026-02-02 06:39:05.603187 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:39:05.603198 | orchestrator | 2026-02-02 06:39:05.603217 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-02 06:39:05.603229 | orchestrator | Monday 02 February 2026 06:39:04 +0000 (0:00:01.254) 1:05:31.976 ******* 2026-02-02 06:39:05.603239 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:39:05.603250 | orchestrator | 2026-02-02 06:39:05.603261 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-02 06:39:05.603285 | orchestrator | Monday 02 February 2026 06:39:05 +0000 (0:00:01.187) 1:05:33.164 ******* 2026-02-02 06:39:08.155641 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:39:08.155851 | orchestrator | 2026-02-02 06:39:08.155874 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-02 06:39:08.155887 
| orchestrator | Monday 02 February 2026 06:39:06 +0000 (0:00:01.161) 1:05:34.326 ******* 2026-02-02 06:39:08.155899 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:39:08.155911 | orchestrator | 2026-02-02 06:39:08.155922 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-02 06:39:08.155933 | orchestrator | Monday 02 February 2026 06:39:07 +0000 (0:00:01.169) 1:05:35.495 ******* 2026-02-02 06:39:08.155946 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:39:08.155962 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--106e1245--4ea8--54a2--9b27--5c2b147fae19-osd--block--106e1245--4ea8--54a2--9b27--5c2b147fae19', 'dm-uuid-LVM-7fojGdQjjxzlZ1d67G3lfXV0uQvvNrpG74l8TP6AWG5LY1LTlUkEVjmQPc2hTMkL'], 'uuids': ['0037b285-4ac2-45c2-8d5f-985073fa4cde'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2d3e981f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['74l8TP-6AWG-5LY1-LTlU-kEVj-mQPc-2hTMkL']}})  2026-02-02 06:39:08.156001 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_076229ff-17a9-47be-973d-14b64a36a012', 'scsi-SQEMU_QEMU_HARDDISK_076229ff-17a9-47be-973d-14b64a36a012'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '076229ff', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-02 06:39:08.156015 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-AITawh-CkpC-7L3c-Vqqe-GXUP-7eEh-WwcXRH', 'scsi-0QEMU_QEMU_HARDDISK_9dac4244-a4bc-44f9-ad81-53a595dd15e5', 'scsi-SQEMU_QEMU_HARDDISK_9dac4244-a4bc-44f9-ad81-53a595dd15e5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9dac4244', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89-osd--block--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89']}})  2026-02-02 06:39:08.156041 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:39:08.156053 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:39:08.156084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-26-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-02 06:39:08.156097 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:39:08.156109 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-HvMSds-O6Yc-SLb4-aDqG-ATNE-cOud-g8iQom', 'dm-uuid-CRYPT-LUKS2-6399826b15f3492994c0bc4d1d3bf1c1-HvMSds-O6Yc-SLb4-aDqG-ATNE-cOud-g8iQom'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-02 06:39:08.156128 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:39:08.156140 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89-osd--block--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89', 'dm-uuid-LVM-bGXwDmNnGJLl15xDO66UDgeGoDbpg8C0HvMSdsO6YcSLb4aDqGATNEcOudg8iQom'], 'uuids': ['6399826b-15f3-4929-94c0-bc4d1d3bf1c1'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '9dac4244', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['HvMSds-O6Yc-SLb4-aDqG-ATNE-cOud-g8iQom']}})  2026-02-02 06:39:08.156157 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-QbZaLy-yUYT-ccut-PcI7-2pGL-9PmJ-6NoPFr', 'scsi-0QEMU_QEMU_HARDDISK_2d3e981f-8554-4288-941a-275f46913f28', 'scsi-SQEMU_QEMU_HARDDISK_2d3e981f-8554-4288-941a-275f46913f28'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2d3e981f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--106e1245--4ea8--54a2--9b27--5c2b147fae19-osd--block--106e1245--4ea8--54a2--9b27--5c2b147fae19']}})  2026-02-02 06:39:08.156169 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:39:08.156195 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6d8209b1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part16', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part14', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part15', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part1', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-02 06:39:09.537535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:39:09.537656 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:39:09.537675 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-74l8TP-6AWG-5LY1-LTlU-kEVj-mQPc-2hTMkL', 'dm-uuid-CRYPT-LUKS2-0037b2854ac245c28d5f985073fa4cde-74l8TP-6AWG-5LY1-LTlU-kEVj-mQPc-2hTMkL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-02 06:39:09.537766 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:39:09.537784 | orchestrator | 2026-02-02 06:39:09.537797 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-02 06:39:09.537809 | orchestrator | Monday 02 February 2026 06:39:09 +0000 (0:00:01.389) 1:05:36.885 ******* 2026-02-02 06:39:09.537821 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:39:09.537836 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--106e1245--4ea8--54a2--9b27--5c2b147fae19-osd--block--106e1245--4ea8--54a2--9b27--5c2b147fae19', 'dm-uuid-LVM-7fojGdQjjxzlZ1d67G3lfXV0uQvvNrpG74l8TP6AWG5LY1LTlUkEVjmQPc2hTMkL'], 'uuids': ['0037b285-4ac2-45c2-8d5f-985073fa4cde'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2d3e981f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['74l8TP-6AWG-5LY1-LTlU-kEVj-mQPc-2hTMkL']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:39:09.537871 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_076229ff-17a9-47be-973d-14b64a36a012', 'scsi-SQEMU_QEMU_HARDDISK_076229ff-17a9-47be-973d-14b64a36a012'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '076229ff', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:39:09.537905 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-AITawh-CkpC-7L3c-Vqqe-GXUP-7eEh-WwcXRH', 'scsi-0QEMU_QEMU_HARDDISK_9dac4244-a4bc-44f9-ad81-53a595dd15e5', 'scsi-SQEMU_QEMU_HARDDISK_9dac4244-a4bc-44f9-ad81-53a595dd15e5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9dac4244', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89-osd--block--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:39:09.537926 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:39:09.537938 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:39:09.537950 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-26-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:39:09.537970 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:39:09.537988 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-HvMSds-O6Yc-SLb4-aDqG-ATNE-cOud-g8iQom', 'dm-uuid-CRYPT-LUKS2-6399826b15f3492994c0bc4d1d3bf1c1-HvMSds-O6Yc-SLb4-aDqG-ATNE-cOud-g8iQom'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:39:14.851440 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:39:14.851600 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89-osd--block--6932a8d0--72db--59d0--a33a--0c6e2cbd6a89', 'dm-uuid-LVM-bGXwDmNnGJLl15xDO66UDgeGoDbpg8C0HvMSdsO6YcSLb4aDqGATNEcOudg8iQom'], 'uuids': ['6399826b-15f3-4929-94c0-bc4d1d3bf1c1'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '9dac4244', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['HvMSds-O6Yc-SLb4-aDqG-ATNE-cOud-g8iQom']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:39:14.851625 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-QbZaLy-yUYT-ccut-PcI7-2pGL-9PmJ-6NoPFr', 'scsi-0QEMU_QEMU_HARDDISK_2d3e981f-8554-4288-941a-275f46913f28', 'scsi-SQEMU_QEMU_HARDDISK_2d3e981f-8554-4288-941a-275f46913f28'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2d3e981f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--106e1245--4ea8--54a2--9b27--5c2b147fae19-osd--block--106e1245--4ea8--54a2--9b27--5c2b147fae19']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:39:14.851666 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:39:14.851741 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6d8209b1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part16', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part14', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part15', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part1', 'scsi-SQEMU_QEMU_HARDDISK_6d8209b1-65e9-4122-ac58-4b8b748af111-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:39:14.851759 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:39:14.851771 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:39:14.851791 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-74l8TP-6AWG-5LY1-LTlU-kEVj-mQPc-2hTMkL', 'dm-uuid-CRYPT-LUKS2-0037b2854ac245c28d5f985073fa4cde-74l8TP-6AWG-5LY1-LTlU-kEVj-mQPc-2hTMkL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-02 06:39:14.851804 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:39:14.851817 | orchestrator |
2026-02-02 06:39:14.851830 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-02 06:39:14.851842 | orchestrator | Monday 02 February 2026 06:39:10 +0000 (0:00:01.420) 1:05:38.306 *******
2026-02-02 06:39:14.851853 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:39:14.851865 | orchestrator |
2026-02-02 06:39:14.851876 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-02 06:39:14.851887 | orchestrator | Monday 02 February 2026 06:39:12 +0000 (0:00:01.441) 1:05:39.748 *******
2026-02-02 06:39:14.851897 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:39:14.851908 | orchestrator |
2026-02-02 06:39:14.851919 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-02 06:39:14.851930 | orchestrator | Monday 02 February 2026 06:39:13 +0000 (0:00:01.130) 1:05:40.879 *******
2026-02-02 06:39:14.851940 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:39:14.851951 | orchestrator |
2026-02-02 06:39:14.851962 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-02 06:39:14.851981 | orchestrator | Monday 02 February 2026 06:39:14 +0000 (0:00:01.542) 1:05:42.421 *******
2026-02-02 06:39:57.112561 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:39:57.112653 | orchestrator |
2026-02-02 06:39:57.112664 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-02 06:39:57.112673 | orchestrator | Monday 02 February 2026 06:39:15 +0000 (0:00:01.124) 1:05:43.546 *******
2026-02-02 06:39:57.112680 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:39:57.112687 | orchestrator |
2026-02-02 06:39:57.112694 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-02 06:39:57.112701 | orchestrator | Monday 02 February 2026 06:39:17 +0000 (0:00:01.704) 1:05:45.251 *******
2026-02-02 06:39:57.112707 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:39:57.112714 | orchestrator |
2026-02-02 06:39:57.112770 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-02 06:39:57.112777 | orchestrator | Monday 02 February 2026 06:39:18 +0000 (0:00:01.192) 1:05:46.444 *******
2026-02-02 06:39:57.112784 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-02 06:39:57.112792 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-02 06:39:57.112798 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-02 06:39:57.112805 | orchestrator |
2026-02-02 06:39:57.112812 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-02 06:39:57.112819 | orchestrator | Monday 02 February 2026 06:39:20 +0000 (0:00:01.685) 1:05:48.129 *******
2026-02-02 06:39:57.112826 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-02 06:39:57.112833 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-02 06:39:57.112840 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-02 06:39:57.112846 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:39:57.112872 | orchestrator |
2026-02-02 06:39:57.112879 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-02 06:39:57.112898 | orchestrator | Monday 02 February 2026 06:39:21 +0000 (0:00:01.316) 1:05:49.446 *******
2026-02-02 06:39:57.112905 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4
2026-02-02 06:39:57.112912 | orchestrator |
2026-02-02 06:39:57.112919 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-02 06:39:57.112927 | orchestrator | Monday 02 February 2026 06:39:22 +0000 (0:00:01.131) 1:05:50.578 *******
2026-02-02 06:39:57.112934 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:39:57.112940 | orchestrator |
2026-02-02 06:39:57.112947 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-02 06:39:57.112953 | orchestrator | Monday 02 February 2026 06:39:24 +0000 (0:00:01.124) 1:05:51.703 *******
2026-02-02 06:39:57.112960 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:39:57.112967 | orchestrator |
2026-02-02 06:39:57.112973 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-02 06:39:57.112980 | orchestrator | Monday 02 February 2026 06:39:25 +0000 (0:00:01.116) 1:05:52.819 *******
2026-02-02 06:39:57.112986 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:39:57.112993 | orchestrator |
2026-02-02 06:39:57.113000 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-02 06:39:57.113006 | orchestrator | Monday 02 February 2026 06:39:26 +0000 (0:00:01.121) 1:05:53.940 *******
2026-02-02 06:39:57.113013 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:39:57.113020 | orchestrator |
2026-02-02 06:39:57.113026 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-02 06:39:57.113033 | orchestrator | Monday 02 February 2026 06:39:27 +0000 (0:00:01.232) 1:05:55.173 *******
2026-02-02 06:39:57.113040 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-02 06:39:57.113047 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-02 06:39:57.113053 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-02 06:39:57.113060 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:39:57.113067 | orchestrator |
2026-02-02 06:39:57.113073 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-02 06:39:57.113080 | orchestrator | Monday 02 February 2026 06:39:28 +0000 (0:00:01.359) 1:05:56.533 *******
2026-02-02 06:39:57.113086 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-02 06:39:57.113093 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-02 06:39:57.113100 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-02 06:39:57.113106 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:39:57.113113 | orchestrator |
2026-02-02 06:39:57.113121 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-02 06:39:57.113129 | orchestrator | Monday 02 February 2026 06:39:30 +0000 (0:00:01.382) 1:05:57.915 *******
2026-02-02 06:39:57.113136 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-02 06:39:57.113144 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-02 06:39:57.113152 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-02 06:39:57.113159 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:39:57.113167 | orchestrator |
2026-02-02 06:39:57.113175 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-02 06:39:57.113182 | orchestrator | Monday 02 February 2026 06:39:32 +0000 (0:00:01.851) 1:05:59.767 *******
2026-02-02 06:39:57.113190 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:39:57.113198 | orchestrator |
2026-02-02 06:39:57.113205 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-02 06:39:57.113213 | orchestrator | Monday 02 February 2026 06:39:33 +0000 (0:00:01.147) 1:06:00.915 *******
2026-02-02 06:39:57.113221 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-02 06:39:57.113235 | orchestrator |
2026-02-02 06:39:57.113243 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-02 06:39:57.113250 | orchestrator | Monday 02 February 2026 06:39:35 +0000 (0:00:01.787) 1:06:02.702 *******
2026-02-02 06:39:57.113271 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-02 06:39:57.113279 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 06:39:57.113287 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 06:39:57.113295 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-02 06:39:57.113303 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-02-02 06:39:57.113310 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-02 06:39:57.113318 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-02 06:39:57.113326 | orchestrator |
2026-02-02 06:39:57.113333 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-02 06:39:57.113341 | orchestrator | Monday 02 February 2026 06:39:36 +0000 (0:00:01.844) 1:06:04.547 *******
2026-02-02 06:39:57.113348 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-02 06:39:57.113356 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-02 06:39:57.113365 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-02 06:39:57.113373 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-02 06:39:57.113380 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-02-02 06:39:57.113388 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-02 06:39:57.113396 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-02 06:39:57.113403 | orchestrator |
2026-02-02 06:39:57.113415 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-02-02 06:39:57.113423 | orchestrator | Monday 02 February 2026 06:39:39 +0000 (0:00:02.296) 1:06:06.843 *******
2026-02-02 06:39:57.113430 | orchestrator | changed: [testbed-node-4]
2026-02-02 06:39:57.113438 | orchestrator |
2026-02-02 06:39:57.113446 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-02-02 06:39:57.113453 | orchestrator | Monday 02 February 2026 06:39:41 +0000 (0:00:01.940) 1:06:08.784 *******
2026-02-02 06:39:57.113461 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-02 06:39:57.113469 | orchestrator |
2026-02-02 06:39:57.113477 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-02-02 06:39:57.113484 | orchestrator | Monday 02 February 2026 06:39:43 +0000 (0:00:02.532) 1:06:11.317 *******
2026-02-02 06:39:57.113490 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-02 06:39:57.113497 | orchestrator |
2026-02-02 06:39:57.113503 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-02 06:39:57.113510 | orchestrator | Monday 02 February 2026 06:39:45 +0000 (0:00:01.937) 1:06:13.254 *******
2026-02-02 06:39:57.113516 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4
2026-02-02 06:39:57.113523 | orchestrator |
2026-02-02 06:39:57.113530 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-02 06:39:57.113536 | orchestrator | Monday 02 February 2026 06:39:46 +0000 (0:00:01.155) 1:06:14.410 *******
2026-02-02 06:39:57.113543 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4
2026-02-02 06:39:57.113550 | orchestrator |
2026-02-02 06:39:57.113556 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-02 06:39:57.113567 | orchestrator | Monday 02 February 2026 06:39:47 +0000 (0:00:01.118) 1:06:15.528 *******
2026-02-02 06:39:57.113574 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:39:57.113580 | orchestrator |
2026-02-02 06:39:57.113587 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-02 06:39:57.113593 | orchestrator | Monday 02 February 2026 06:39:49 +0000 (0:00:01.117) 1:06:16.646 *******
2026-02-02 06:39:57.113600 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:39:57.113607 | orchestrator |
2026-02-02 06:39:57.113613 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-02 06:39:57.113620 | orchestrator | Monday 02 February 2026 06:39:50 +0000 (0:00:01.513) 1:06:18.159 *******
2026-02-02 06:39:57.113626 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:39:57.113633 | orchestrator |
2026-02-02 06:39:57.113639 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-02 06:39:57.113646 | orchestrator | Monday 02 February 2026 06:39:52 +0000 (0:00:01.553) 1:06:19.712 *******
2026-02-02 06:39:57.113652 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:39:57.113659 | orchestrator |
2026-02-02 06:39:57.113666 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-02 06:39:57.113672 | orchestrator | Monday 02 February 2026 06:39:53 +0000 (0:00:01.530) 1:06:21.243 *******
2026-02-02 06:39:57.113679 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:39:57.113685 | orchestrator |
2026-02-02 06:39:57.113692 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-02 06:39:57.113699 | orchestrator | Monday 02 February 2026 06:39:54 +0000 (0:00:01.180) 1:06:22.424 *******
2026-02-02 06:39:57.113706 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:39:57.113712 | orchestrator |
2026-02-02 06:39:57.113735 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-02 06:39:57.113742 | orchestrator | Monday 02 February 2026 06:39:55 +0000 (0:00:01.134) 1:06:23.558 *******
2026-02-02 06:39:57.113749 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:39:57.113756 | orchestrator |
2026-02-02 06:39:57.113762 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-02 06:39:57.113774 | orchestrator | Monday 02 February 2026 06:39:57 +0000 (0:00:01.123) 1:06:24.681 *******
2026-02-02 06:40:37.338438 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:40:37.338553 | orchestrator |
2026-02-02 06:40:37.338571 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-02 06:40:37.338584 | orchestrator | Monday 02 February 2026 06:39:58 +0000 (0:00:01.516) 1:06:26.198 *******
2026-02-02 06:40:37.338595 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:40:37.338606 | orchestrator |
2026-02-02 06:40:37.338617 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-02 06:40:37.338628 | orchestrator | Monday 02 February 2026 06:40:00 +0000 (0:00:01.540) 1:06:27.739 *******
2026-02-02 06:40:37.338639 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:40:37.338651 | orchestrator |
2026-02-02 06:40:37.338662 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-02 06:40:37.338673 | orchestrator | Monday 02 February 2026 06:40:00 +0000 (0:00:00.807) 1:06:28.546 *******
2026-02-02 06:40:37.338684 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:40:37.338694 | orchestrator |
2026-02-02 06:40:37.338705 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-02 06:40:37.338716 | orchestrator | Monday 02 February 2026 06:40:01 +0000 (0:00:00.785) 1:06:29.332 *******
2026-02-02 06:40:37.338780 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:40:37.338793 | orchestrator |
2026-02-02 06:40:37.338804 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-02 06:40:37.338815 | orchestrator | Monday 02 February 2026 06:40:02 +0000 (0:00:00.762) 1:06:30.094 *******
2026-02-02 06:40:37.338826 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:40:37.338836 | orchestrator |
2026-02-02 06:40:37.338847 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-02 06:40:37.338882 | orchestrator | Monday 02 February 2026 06:40:03 +0000 (0:00:00.807) 1:06:30.902 *******
2026-02-02 06:40:37.338893 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:40:37.338904 | orchestrator |
2026-02-02 06:40:37.338929 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-02 06:40:37.338940 | orchestrator | Monday 02 February 2026 06:40:04 +0000 (0:00:00.839) 1:06:31.741 *******
2026-02-02 06:40:37.338951 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:40:37.338963 | orchestrator |
2026-02-02 06:40:37.338981 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-02 06:40:37.338998 | orchestrator | Monday 02 February 2026 06:40:04 +0000 (0:00:00.787) 1:06:32.529 *******
2026-02-02 06:40:37.339017 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:40:37.339035 | orchestrator |
2026-02-02 06:40:37.339053 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-02 06:40:37.339071 | orchestrator | Monday 02 February 2026 06:40:05 +0000 (0:00:00.919) 1:06:33.448 *******
2026-02-02 06:40:37.339089 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:40:37.339106 | orchestrator |
2026-02-02 06:40:37.339123 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-02 06:40:37.339143 | orchestrator | Monday 02 February 2026 06:40:06 +0000 (0:00:00.774) 1:06:34.223 *******
2026-02-02 06:40:37.339161 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:40:37.339180 | orchestrator |
2026-02-02 06:40:37.339199 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-02 06:40:37.339216 | orchestrator | Monday 02 February 2026 06:40:07 +0000 (0:00:00.777) 1:06:35.001 *******
2026-02-02 06:40:37.339234 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:40:37.339245 | orchestrator |
2026-02-02 06:40:37.339256 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-02 06:40:37.339267 | orchestrator | Monday 02 February 2026 06:40:08 +0000 (0:00:00.847) 1:06:35.849 *******
2026-02-02 06:40:37.339277 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:40:37.339287 | orchestrator |
2026-02-02 06:40:37.339298 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-02 06:40:37.339309 | orchestrator | Monday 02 February 2026 06:40:09 +0000 (0:00:00.747) 1:06:36.597 *******
2026-02-02 06:40:37.339319 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:40:37.339330 | orchestrator |
2026-02-02 06:40:37.339341 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-02 06:40:37.339351 | orchestrator | Monday 02 February 2026 06:40:09 +0000 (0:00:00.794) 1:06:37.391 *******
2026-02-02 06:40:37.339362 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:40:37.339372 | orchestrator |
2026-02-02 06:40:37.339383 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-02 06:40:37.339394 | orchestrator | Monday 02 February 2026 06:40:10 +0000 (0:00:00.784) 1:06:38.175 *******
2026-02-02 06:40:37.339404 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:40:37.339415 | orchestrator |
2026-02-02 06:40:37.339425 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-02 06:40:37.339436 | orchestrator | Monday 02 February 2026 06:40:11 +0000 (0:00:00.763) 1:06:38.939 *******
2026-02-02 06:40:37.339447 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:40:37.339457 | orchestrator |
2026-02-02 06:40:37.339468 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-02 06:40:37.339479 | orchestrator | Monday 02 February 2026 06:40:12 +0000 (0:00:00.738) 1:06:39.678 *******
2026-02-02 06:40:37.339489 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:40:37.339500 | orchestrator |
2026-02-02 06:40:37.339510 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-02 06:40:37.339522 | orchestrator | Monday 02 February 2026 06:40:12 +0000 (0:00:00.763) 1:06:40.442 *******
2026-02-02 06:40:37.339532 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:40:37.339543 | orchestrator |
2026-02-02 06:40:37.339554 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-02 06:40:37.339585 | orchestrator | Monday 02 February 2026 06:40:13 +0000 (0:00:00.771) 1:06:41.213 *******
2026-02-02 06:40:37.339596 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:40:37.339606 | orchestrator |
2026-02-02 06:40:37.339617 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-02 06:40:37.339628 | orchestrator | Monday 02 February 2026 06:40:14 +0000 (0:00:00.746) 1:06:41.959 *******
2026-02-02 06:40:37.339639 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:40:37.339649 | orchestrator |
2026-02-02 06:40:37.339678 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-02 06:40:37.339689 | orchestrator | Monday 02 February 2026 06:40:15 +0000 (0:00:00.862) 1:06:42.821 *******
2026-02-02 06:40:37.339700 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:40:37.339710 | orchestrator |
2026-02-02 06:40:37.339721 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-02 06:40:37.339774 | orchestrator | Monday 02 February 2026 06:40:16 +0000 (0:00:00.845) 1:06:43.667 *******
2026-02-02 06:40:37.339786 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:40:37.339797 | orchestrator |
2026-02-02 06:40:37.339808 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-02 06:40:37.339818 | orchestrator | Monday 02 February 2026 06:40:16 +0000 (0:00:00.759) 1:06:44.426 *******
2026-02-02 06:40:37.339829 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:40:37.339840 | orchestrator |
2026-02-02 06:40:37.339851 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-02 06:40:37.339861 | orchestrator | Monday 02 February 2026 06:40:17 +0000 (0:00:00.821) 1:06:45.248 *******
2026-02-02 06:40:37.339872 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:40:37.339883 | orchestrator |
2026-02-02 06:40:37.339894 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-02 06:40:37.339904 | orchestrator | Monday 02 February 2026 06:40:19 +0000 (0:00:01.597) 1:06:46.845 *******
2026-02-02 06:40:37.339915 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:40:37.339925 | orchestrator |
2026-02-02 06:40:37.339936 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-02 06:40:37.339947 | orchestrator | Monday 02 February 2026 06:40:21 +0000 (0:00:01.901) 1:06:48.747 *******
2026-02-02 06:40:37.339958 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4
2026-02-02 06:40:37.339970 | orchestrator |
2026-02-02 06:40:37.339988 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-02 06:40:37.339999 | orchestrator | Monday 02 February 2026 06:40:22 +0000 (0:00:01.129) 1:06:49.876 *******
2026-02-02 06:40:37.340010 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:40:37.340021 | orchestrator |
2026-02-02 06:40:37.340031 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-02 06:40:37.340042 | orchestrator | Monday 02 February 2026 06:40:23 +0000 (0:00:01.159) 1:06:51.036 *******
2026-02-02 06:40:37.340053 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:40:37.340063 | orchestrator |
2026-02-02 06:40:37.340074 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-02 06:40:37.340085 | orchestrator | Monday 02 February 2026 06:40:24 +0000 (0:00:01.113) 1:06:52.150 *******
2026-02-02 06:40:37.340095 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-02 06:40:37.340106 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-02 06:40:37.340117 | orchestrator |
2026-02-02 06:40:37.340128 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-02 06:40:37.340139 | orchestrator | Monday 02 February 2026 06:40:26 +0000 (0:00:01.781) 1:06:53.931 *******
2026-02-02 06:40:37.340150 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:40:37.340160 | orchestrator |
2026-02-02 06:40:37.340171 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-02 06:40:37.340189 | orchestrator | Monday 02 February 2026 06:40:27 +0000 (0:00:01.416) 1:06:55.347 *******
2026-02-02 06:40:37.340200 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:40:37.340211 | orchestrator |
2026-02-02 06:40:37.340222 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-02 06:40:37.340232 | orchestrator | Monday 02 February 2026 06:40:28 +0000 (0:00:01.111) 1:06:56.459 *******
2026-02-02 06:40:37.340243 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:40:37.340254 | orchestrator |
2026-02-02 06:40:37.340265 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-02 06:40:37.340275 | orchestrator | Monday 02 February 2026 06:40:30 +0000 (0:00:01.331) 1:06:57.791 *******
2026-02-02 06:40:37.340286 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:40:37.340297 | orchestrator |
2026-02-02 06:40:37.340307 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-02 06:40:37.340318 | orchestrator | Monday 02 February 2026 06:40:30 +0000 (0:00:00.777) 1:06:58.569 *******
2026-02-02 06:40:37.340329 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4
2026-02-02 06:40:37.340339 | orchestrator |
2026-02-02 06:40:37.340350 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-02 06:40:37.340361 | orchestrator | Monday 02 February 2026 06:40:32 +0000 (0:00:01.115) 1:06:59.685 *******
2026-02-02 06:40:37.340371 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:40:37.340382 | orchestrator |
2026-02-02 06:40:37.340393 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-02 06:40:37.340404 | orchestrator | Monday 02 February 2026 06:40:33 +0000 (0:00:01.750) 1:07:01.435 *******
2026-02-02 06:40:37.340415 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-02 06:40:37.340426 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-02 06:40:37.340436 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-02 06:40:37.340447 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:40:37.340457 | orchestrator |
2026-02-02 06:40:37.340468 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-02 06:40:37.340479 | orchestrator | Monday 02 February 2026 06:40:35 +0000 (0:00:01.153) 1:07:02.588 *******
2026-02-02 06:40:37.340489 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:40:37.340500 | orchestrator |
2026-02-02 06:40:37.340519 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-02 06:40:37.340538 | orchestrator | Monday 02 February 2026 06:40:36 +0000 (0:00:01.142) 1:07:03.730 *******
2026-02-02 06:40:37.340558 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:40:37.340577 | orchestrator |
2026-02-02 06:40:37.340605 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-02 06:41:19.989399 | orchestrator | Monday 02 February 2026 06:40:37 +0000 (0:00:01.178) 1:07:04.909 *******
2026-02-02 06:41:19.989540 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:41:19.989561 | orchestrator |
2026-02-02 06:41:19.989579 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-02 06:41:19.989596 | orchestrator | Monday 02 February 2026 06:40:38 +0000 (0:00:01.141) 1:07:06.051 *******
2026-02-02 06:41:19.989612 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:41:19.989629 | orchestrator |
2026-02-02 06:41:19.989645 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-02 06:41:19.989660 | orchestrator | Monday 02 February 2026 06:40:39 +0000 (0:00:01.138) 1:07:07.190 *******
2026-02-02 06:41:19.989676 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:41:19.989693 | orchestrator |
2026-02-02 06:41:19.989709 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-02 06:41:19.989726 | orchestrator | Monday 02 February 2026 06:40:40 +0000 (0:00:00.771) 1:07:07.962 *******
2026-02-02 06:41:19.989771 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:41:19.989790 | orchestrator |
2026-02-02 06:41:19.989836 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-02 06:41:19.989855 | orchestrator | Monday 02 February 2026 06:40:42 +0000 (0:00:02.106) 1:07:10.068 *******
2026-02-02 06:41:19.989871 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:41:19.989887 | orchestrator |
2026-02-02 06:41:19.989904 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-02 06:41:19.989918 | orchestrator | Monday 02 February 2026 06:40:43 +0000 (0:00:00.773) 1:07:10.842 *******
2026-02-02 06:41:19.989932 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4
2026-02-02 06:41:19.989945 | orchestrator |
2026-02-02 06:41:19.989976 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-02 06:41:19.989990 | orchestrator | Monday 02 February 2026 06:40:44 +0000 (0:00:01.308) 1:07:12.150 *******
2026-02-02 06:41:19.990004 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:41:19.990093 | orchestrator |
2026-02-02 06:41:19.990108 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-02 06:41:19.990123 | orchestrator | Monday 02 February 2026 06:40:45 +0000 (0:00:01.175) 1:07:13.326 *******
2026-02-02 06:41:19.990138 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:41:19.990152 | orchestrator |
2026-02-02 06:41:19.990167 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-02 06:41:19.990182 | orchestrator | Monday 02 February 2026 06:40:46 +0000 (0:00:01.170) 1:07:14.497 *******
2026-02-02 06:41:19.990197 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:41:19.990211 | orchestrator |
2026-02-02 06:41:19.990225 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-02 06:41:19.990239 | orchestrator | Monday 02 February 2026 06:40:48 +0000 (0:00:01.174) 1:07:15.671 *******
2026-02-02 06:41:19.990254 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:41:19.990268 | orchestrator |
2026-02-02 06:41:19.990283 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-02 06:41:19.990297 | orchestrator | Monday 02 February 2026 06:40:49 +0000 (0:00:01.151) 1:07:16.823 *******
2026-02-02 06:41:19.990311 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:41:19.990325 | orchestrator |
2026-02-02 06:41:19.990338 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-02 06:41:19.990353 | orchestrator | Monday 02 February 2026 06:40:50 +0000 (0:00:01.178) 1:07:18.002 *******
2026-02-02 06:41:19.990368 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:41:19.990382 | orchestrator |
2026-02-02 06:41:19.990396 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-02 06:41:19.990411 | orchestrator | Monday 02 February 2026 06:40:51 +0000 (0:00:01.168) 1:07:19.170 *******
2026-02-02 06:41:19.990425 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:41:19.990438 | orchestrator |
2026-02-02 06:41:19.990453 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-02 06:41:19.990467 | orchestrator | Monday 02 February 2026 06:40:52 +0000 (0:00:01.120) 1:07:20.291 *******
2026-02-02 06:41:19.990481 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:41:19.990495 | orchestrator |
2026-02-02 06:41:19.990511 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-02 06:41:19.990524 | orchestrator | Monday 02 February 2026 06:40:53 +0000 (0:00:01.117) 1:07:21.409 *******
2026-02-02 06:41:19.990538 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:41:19.990553 | orchestrator |
2026-02-02 06:41:19.990566 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-02 06:41:19.990581 | orchestrator | Monday 02 February 2026 06:40:54 +0000 (0:00:00.802) 1:07:22.211 *******
2026-02-02 06:41:19.990594 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4
2026-02-02 06:41:19.990610 | orchestrator |
2026-02-02 06:41:19.990624 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-02 06:41:19.990639 | orchestrator | Monday 02 February 2026 06:40:55 +0000 (0:00:01.314) 1:07:23.526 *******
2026-02-02 06:41:19.990667 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph)
2026-02-02 06:41:19.990681 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/)
2026-02-02 06:41:19.990695 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-02-02 06:41:19.990710 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-02-02 06:41:19.990724 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-02-02 06:41:19.990737 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-02-02 06:41:19.990784 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-02-02 06:41:19.990799 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-02-02 06:41:19.990812 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-02 06:41:19.990824 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-02 06:41:19.990837 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-02 06:41:19.990874 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-02 06:41:19.990887 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-02 06:41:19.990899 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-02 06:41:19.990912 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
2026-02-02 06:41:19.990925 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph)
2026-02-02 06:41:19.990939 | orchestrator |
2026-02-02 06:41:19.990952 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-02 06:41:19.990966 | orchestrator | Monday 02 February 2026 06:41:02 +0000 (0:00:06.227) 1:07:29.753 *******
2026-02-02 06:41:19.990979 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4
2026-02-02 06:41:19.990990 | orchestrator |
2026-02-02 06:41:19.991003 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-02 06:41:19.991015 | orchestrator | Monday 02 February 2026 06:41:03 +0000 (0:00:01.103) 1:07:30.857 *******
2026-02-02 06:41:19.991028 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-02 06:41:19.991043 | orchestrator |
2026-02-02 06:41:19.991056 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-02 06:41:19.991068 | orchestrator | Monday 02 February 2026 06:41:04 +0000 (0:00:01.472) 1:07:32.329 *******
2026-02-02 06:41:19.991080 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-02 06:41:19.991093 | orchestrator |
2026-02-02 06:41:19.991129 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-02 06:41:19.991142 | orchestrator | Monday 02 February 2026 06:41:06 +0000 (0:00:01.644) 1:07:33.974 *******
2026-02-02 06:41:19.991154 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:41:19.991166 | orchestrator |
2026-02-02 06:41:19.991180 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-02 06:41:19.991192 | orchestrator | Monday 02 February 2026 06:41:07 +0000 (0:00:00.791) 1:07:34.765 *******
2026-02-02 06:41:19.991204 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:41:19.991217 |
orchestrator | 2026-02-02 06:41:19.991230 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-02 06:41:19.991244 | orchestrator | Monday 02 February 2026 06:41:07 +0000 (0:00:00.756) 1:07:35.521 ******* 2026-02-02 06:41:19.991256 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:41:19.991269 | orchestrator | 2026-02-02 06:41:19.991283 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-02 06:41:19.991296 | orchestrator | Monday 02 February 2026 06:41:08 +0000 (0:00:00.787) 1:07:36.309 ******* 2026-02-02 06:41:19.991309 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:41:19.991322 | orchestrator | 2026-02-02 06:41:19.991335 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-02 06:41:19.991357 | orchestrator | Monday 02 February 2026 06:41:09 +0000 (0:00:00.749) 1:07:37.059 ******* 2026-02-02 06:41:19.991371 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:41:19.991383 | orchestrator | 2026-02-02 06:41:19.991395 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-02 06:41:19.991407 | orchestrator | Monday 02 February 2026 06:41:10 +0000 (0:00:00.763) 1:07:37.823 ******* 2026-02-02 06:41:19.991419 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:41:19.991432 | orchestrator | 2026-02-02 06:41:19.991445 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-02 06:41:19.991458 | orchestrator | Monday 02 February 2026 06:41:11 +0000 (0:00:00.816) 1:07:38.640 ******* 2026-02-02 06:41:19.991471 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:41:19.991484 | orchestrator | 2026-02-02 06:41:19.991496 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-02-02 06:41:19.991508 | orchestrator | Monday 02 February 2026 06:41:11 +0000 (0:00:00.808) 1:07:39.448 ******* 2026-02-02 06:41:19.991521 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:41:19.991534 | orchestrator | 2026-02-02 06:41:19.991546 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-02 06:41:19.991558 | orchestrator | Monday 02 February 2026 06:41:12 +0000 (0:00:00.875) 1:07:40.323 ******* 2026-02-02 06:41:19.991570 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:41:19.991583 | orchestrator | 2026-02-02 06:41:19.991596 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-02 06:41:19.991608 | orchestrator | Monday 02 February 2026 06:41:13 +0000 (0:00:00.785) 1:07:41.108 ******* 2026-02-02 06:41:19.991621 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:41:19.991635 | orchestrator | 2026-02-02 06:41:19.991647 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-02 06:41:19.991659 | orchestrator | Monday 02 February 2026 06:41:14 +0000 (0:00:00.761) 1:07:41.870 ******* 2026-02-02 06:41:19.991671 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:41:19.991685 | orchestrator | 2026-02-02 06:41:19.991697 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-02 06:41:19.991709 | orchestrator | Monday 02 February 2026 06:41:15 +0000 (0:00:00.841) 1:07:42.712 ******* 2026-02-02 06:41:19.991722 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-02-02 06:41:19.991735 | orchestrator | 2026-02-02 06:41:19.991810 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-02 06:41:19.991825 | orchestrator | Monday 02 February 2026 06:41:19 +0000 (0:00:04.007) 1:07:46.719 ******* 2026-02-02 06:41:19.991838 | orchestrator | 
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-02 06:41:19.991853 | orchestrator | 2026-02-02 06:41:19.991878 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-02 06:42:00.614835 | orchestrator | Monday 02 February 2026 06:41:19 +0000 (0:00:00.836) 1:07:47.555 ******* 2026-02-02 06:42:00.614916 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-02-02 06:42:00.614925 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-02-02 06:42:00.614931 | orchestrator | 2026-02-02 06:42:00.614935 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-02 06:42:00.614954 | orchestrator | Monday 02 February 2026 06:41:24 +0000 (0:00:04.557) 1:07:52.113 ******* 2026-02-02 06:42:00.614958 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:42:00.614963 | orchestrator | 2026-02-02 06:42:00.614967 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-02 06:42:00.614971 | orchestrator | Monday 02 February 2026 06:41:25 +0000 (0:00:00.760) 1:07:52.874 ******* 2026-02-02 06:42:00.614975 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:42:00.614978 | orchestrator | 2026-02-02 06:42:00.614993 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-02 06:42:00.614998 | orchestrator | Monday 02 February 2026 06:41:26 +0000 (0:00:00.747) 1:07:53.622 ******* 2026-02-02 06:42:00.615002 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:42:00.615006 | orchestrator | 2026-02-02 06:42:00.615009 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-02 06:42:00.615014 | orchestrator | Monday 02 February 2026 06:41:26 +0000 (0:00:00.844) 1:07:54.466 ******* 2026-02-02 06:42:00.615017 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:42:00.615021 | orchestrator | 2026-02-02 06:42:00.615025 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-02 06:42:00.615029 | orchestrator | Monday 02 February 2026 06:41:27 +0000 (0:00:00.836) 1:07:55.303 ******* 2026-02-02 06:42:00.615032 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:42:00.615036 | orchestrator | 2026-02-02 06:42:00.615040 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-02 06:42:00.615044 | orchestrator | Monday 02 February 2026 06:41:28 +0000 (0:00:00.777) 1:07:56.080 ******* 2026-02-02 06:42:00.615047 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:42:00.615052 | orchestrator | 2026-02-02 06:42:00.615056 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-02 06:42:00.615060 | orchestrator | Monday 02 February 2026 06:41:29 +0000 (0:00:00.951) 1:07:57.031 ******* 2026-02-02 06:42:00.615064 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-02 06:42:00.615068 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-02 06:42:00.615072 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-02 06:42:00.615075 | orchestrator | skipping: 
[testbed-node-4] 2026-02-02 06:42:00.615079 | orchestrator | 2026-02-02 06:42:00.615083 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-02 06:42:00.615087 | orchestrator | Monday 02 February 2026 06:41:31 +0000 (0:00:01.576) 1:07:58.608 ******* 2026-02-02 06:42:00.615090 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-02 06:42:00.615094 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-02 06:42:00.615098 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-02 06:42:00.615101 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:42:00.615105 | orchestrator | 2026-02-02 06:42:00.615109 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-02 06:42:00.615112 | orchestrator | Monday 02 February 2026 06:41:32 +0000 (0:00:01.059) 1:07:59.667 ******* 2026-02-02 06:42:00.615116 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-02 06:42:00.615120 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-02 06:42:00.615124 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-02 06:42:00.615127 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:42:00.615131 | orchestrator | 2026-02-02 06:42:00.615135 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-02 06:42:00.615138 | orchestrator | Monday 02 February 2026 06:41:33 +0000 (0:00:01.057) 1:08:00.724 ******* 2026-02-02 06:42:00.615142 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:42:00.615146 | orchestrator | 2026-02-02 06:42:00.615150 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-02 06:42:00.615153 | orchestrator | Monday 02 February 2026 06:41:33 +0000 (0:00:00.832) 1:08:01.557 ******* 2026-02-02 06:42:00.615161 | orchestrator | ok: 
[testbed-node-4] => (item=0) 2026-02-02 06:42:00.615165 | orchestrator | 2026-02-02 06:42:00.615168 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-02 06:42:00.615172 | orchestrator | Monday 02 February 2026 06:41:34 +0000 (0:00:00.991) 1:08:02.549 ******* 2026-02-02 06:42:00.615176 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:42:00.615180 | orchestrator | 2026-02-02 06:42:00.615183 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-02 06:42:00.615187 | orchestrator | Monday 02 February 2026 06:41:36 +0000 (0:00:01.463) 1:08:04.012 ******* 2026-02-02 06:42:00.615191 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-4 2026-02-02 06:42:00.615195 | orchestrator | 2026-02-02 06:42:00.615208 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-02 06:42:00.615212 | orchestrator | Monday 02 February 2026 06:41:37 +0000 (0:00:01.081) 1:08:05.094 ******* 2026-02-02 06:42:00.615216 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 06:42:00.615220 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-02 06:42:00.615224 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-02 06:42:00.615227 | orchestrator | 2026-02-02 06:42:00.615231 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-02 06:42:00.615235 | orchestrator | Monday 02 February 2026 06:41:40 +0000 (0:00:03.194) 1:08:08.288 ******* 2026-02-02 06:42:00.615238 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-02 06:42:00.615242 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-02 06:42:00.615246 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:42:00.615250 | orchestrator | 2026-02-02 06:42:00.615253 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-02-02 06:42:00.615257 | orchestrator | Monday 02 February 2026 06:41:42 +0000 (0:00:01.945) 1:08:10.234 ******* 2026-02-02 06:42:00.615261 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:42:00.615264 | orchestrator | 2026-02-02 06:42:00.615268 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-02 06:42:00.615272 | orchestrator | Monday 02 February 2026 06:41:43 +0000 (0:00:00.791) 1:08:11.026 ******* 2026-02-02 06:42:00.615276 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-4 2026-02-02 06:42:00.615280 | orchestrator | 2026-02-02 06:42:00.615285 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-02 06:42:00.615291 | orchestrator | Monday 02 February 2026 06:41:44 +0000 (0:00:01.259) 1:08:12.286 ******* 2026-02-02 06:42:00.615297 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-02 06:42:00.615304 | orchestrator | 2026-02-02 06:42:00.615310 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-02 06:42:00.615317 | orchestrator | Monday 02 February 2026 06:41:46 +0000 (0:00:01.623) 1:08:13.909 ******* 2026-02-02 06:42:00.615323 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 06:42:00.615329 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-02 06:42:00.615334 | orchestrator | 2026-02-02 06:42:00.615338 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-02 06:42:00.615342 | orchestrator | Monday 02 February 2026 06:41:51 +0000 (0:00:05.200) 1:08:19.110 ******* 
2026-02-02 06:42:00.615346 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-02 06:42:00.615350 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-02 06:42:00.615353 | orchestrator | 2026-02-02 06:42:00.615357 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-02 06:42:00.615364 | orchestrator | Monday 02 February 2026 06:41:54 +0000 (0:00:03.069) 1:08:22.179 ******* 2026-02-02 06:42:00.615368 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-02 06:42:00.615373 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:42:00.615378 | orchestrator | 2026-02-02 06:42:00.615384 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-02 06:42:00.615391 | orchestrator | Monday 02 February 2026 06:41:56 +0000 (0:00:01.664) 1:08:23.844 ******* 2026-02-02 06:42:00.615398 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-4 2026-02-02 06:42:00.615404 | orchestrator | 2026-02-02 06:42:00.615410 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-02 06:42:00.615416 | orchestrator | Monday 02 February 2026 06:41:57 +0000 (0:00:01.168) 1:08:25.013 ******* 2026-02-02 06:42:00.615423 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 06:42:00.615455 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 06:42:00.615460 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 06:42:00.615465 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-02-02 06:42:00.615469 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 06:42:00.615474 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:42:00.615478 | orchestrator | 2026-02-02 06:42:00.615483 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-02 06:42:00.615487 | orchestrator | Monday 02 February 2026 06:41:59 +0000 (0:00:01.616) 1:08:26.629 ******* 2026-02-02 06:42:00.615492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 06:42:00.615496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 06:42:00.615502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 06:42:00.615512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 06:43:07.211561 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-02 06:43:07.211654 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:43:07.211665 | orchestrator | 2026-02-02 06:43:07.211674 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-02 06:43:07.211683 | orchestrator | Monday 02 February 2026 06:42:00 +0000 (0:00:01.554) 1:08:28.183 ******* 2026-02-02 06:43:07.211691 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-02 06:43:07.211699 
| orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-02 06:43:07.211707 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-02 06:43:07.211714 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-02 06:43:07.211723 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-02 06:43:07.211750 | orchestrator | 2026-02-02 06:43:07.211758 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-02 06:43:07.211766 | orchestrator | Monday 02 February 2026 06:42:32 +0000 (0:00:31.591) 1:08:59.775 ******* 2026-02-02 06:43:07.211815 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:43:07.211824 | orchestrator | 2026-02-02 06:43:07.211831 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-02 06:43:07.211839 | orchestrator | Monday 02 February 2026 06:42:32 +0000 (0:00:00.768) 1:09:00.544 ******* 2026-02-02 06:43:07.211846 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:43:07.211853 | orchestrator | 2026-02-02 06:43:07.211860 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-02 06:43:07.211868 | orchestrator | Monday 02 February 2026 06:42:33 +0000 (0:00:00.778) 1:09:01.322 ******* 2026-02-02 06:43:07.211875 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-4 2026-02-02 06:43:07.211883 | orchestrator | 2026-02-02 06:43:07.211890 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-02-02 06:43:07.211897 | orchestrator | Monday 02 February 2026 06:42:35 +0000 (0:00:01.366) 1:09:02.689 ******* 2026-02-02 06:43:07.211904 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-4 2026-02-02 06:43:07.211912 | orchestrator | 2026-02-02 06:43:07.211919 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-02 06:43:07.211926 | orchestrator | Monday 02 February 2026 06:42:36 +0000 (0:00:01.153) 1:09:03.843 ******* 2026-02-02 06:43:07.211933 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:43:07.211942 | orchestrator | 2026-02-02 06:43:07.211949 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-02 06:43:07.211956 | orchestrator | Monday 02 February 2026 06:42:38 +0000 (0:00:02.010) 1:09:05.854 ******* 2026-02-02 06:43:07.211963 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:43:07.211971 | orchestrator | 2026-02-02 06:43:07.211978 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-02 06:43:07.211985 | orchestrator | Monday 02 February 2026 06:42:40 +0000 (0:00:01.917) 1:09:07.771 ******* 2026-02-02 06:43:07.211992 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:43:07.211999 | orchestrator | 2026-02-02 06:43:07.212007 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-02 06:43:07.212014 | orchestrator | Monday 02 February 2026 06:42:42 +0000 (0:00:02.164) 1:09:09.935 ******* 2026-02-02 06:43:07.212021 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-02 06:43:07.212028 | orchestrator | 2026-02-02 06:43:07.212036 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-02-02 06:43:07.212043 | 
orchestrator | 2026-02-02 06:43:07.212050 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-02 06:43:07.212057 | orchestrator | Monday 02 February 2026 06:42:45 +0000 (0:00:03.080) 1:09:13.016 ******* 2026-02-02 06:43:07.212064 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-02-02 06:43:07.212072 | orchestrator | 2026-02-02 06:43:07.212079 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-02 06:43:07.212086 | orchestrator | Monday 02 February 2026 06:42:46 +0000 (0:00:01.132) 1:09:14.148 ******* 2026-02-02 06:43:07.212093 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:43:07.212100 | orchestrator | 2026-02-02 06:43:07.212107 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-02 06:43:07.212115 | orchestrator | Monday 02 February 2026 06:42:48 +0000 (0:00:01.455) 1:09:15.604 ******* 2026-02-02 06:43:07.212124 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:43:07.212133 | orchestrator | 2026-02-02 06:43:07.212142 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-02 06:43:07.212150 | orchestrator | Monday 02 February 2026 06:42:49 +0000 (0:00:01.185) 1:09:16.790 ******* 2026-02-02 06:43:07.212165 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:43:07.212173 | orchestrator | 2026-02-02 06:43:07.212182 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-02 06:43:07.212191 | orchestrator | Monday 02 February 2026 06:42:50 +0000 (0:00:01.471) 1:09:18.261 ******* 2026-02-02 06:43:07.212199 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:43:07.212208 | orchestrator | 2026-02-02 06:43:07.212229 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-02 06:43:07.212238 | orchestrator | Monday 02 
February 2026 06:42:51 +0000 (0:00:01.171) 1:09:19.432 ******* 2026-02-02 06:43:07.212246 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:43:07.212255 | orchestrator | 2026-02-02 06:43:07.212263 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-02 06:43:07.212272 | orchestrator | Monday 02 February 2026 06:42:52 +0000 (0:00:01.122) 1:09:20.555 ******* 2026-02-02 06:43:07.212280 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:43:07.212288 | orchestrator | 2026-02-02 06:43:07.212296 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-02 06:43:07.212305 | orchestrator | Monday 02 February 2026 06:42:54 +0000 (0:00:01.154) 1:09:21.709 ******* 2026-02-02 06:43:07.212313 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:43:07.212322 | orchestrator | 2026-02-02 06:43:07.212330 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-02 06:43:07.212339 | orchestrator | Monday 02 February 2026 06:42:55 +0000 (0:00:01.196) 1:09:22.906 ******* 2026-02-02 06:43:07.212347 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:43:07.212355 | orchestrator | 2026-02-02 06:43:07.212363 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-02 06:43:07.212372 | orchestrator | Monday 02 February 2026 06:42:56 +0000 (0:00:01.131) 1:09:24.037 ******* 2026-02-02 06:43:07.212380 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 06:43:07.212388 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 06:43:07.212397 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 06:43:07.212405 | orchestrator | 2026-02-02 06:43:07.212414 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-02-02 06:43:07.212423 | orchestrator | Monday 02 February 2026 06:42:58 +0000 (0:00:01.691) 1:09:25.729 ******* 2026-02-02 06:43:07.212431 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:43:07.212439 | orchestrator | 2026-02-02 06:43:07.212448 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-02 06:43:07.212458 | orchestrator | Monday 02 February 2026 06:42:59 +0000 (0:00:01.258) 1:09:26.988 ******* 2026-02-02 06:43:07.212466 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 06:43:07.212475 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 06:43:07.212482 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 06:43:07.212489 | orchestrator | 2026-02-02 06:43:07.212496 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-02 06:43:07.212504 | orchestrator | Monday 02 February 2026 06:43:02 +0000 (0:00:03.238) 1:09:30.226 ******* 2026-02-02 06:43:07.212511 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-02 06:43:07.212519 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-02 06:43:07.212526 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-02 06:43:07.212533 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:43:07.212540 | orchestrator | 2026-02-02 06:43:07.212548 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-02 06:43:07.212555 | orchestrator | Monday 02 February 2026 06:43:04 +0000 (0:00:01.433) 1:09:31.660 ******* 2026-02-02 06:43:07.212564 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-02 06:43:07.212577 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-02 06:43:07.212585 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-02 06:43:07.212593 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:43:07.212600 | orchestrator | 2026-02-02 06:43:07.212607 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-02 06:43:07.212614 | orchestrator | Monday 02 February 2026 06:43:06 +0000 (0:00:01.983) 1:09:33.644 ******* 2026-02-02 06:43:07.212623 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 06:43:07.212637 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 06:43:26.511298 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-02 06:43:26.511396 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:43:26.511408 | orchestrator | 2026-02-02 06:43:26.511416 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-02 06:43:26.511424 | orchestrator | Monday 02 February 2026 06:43:07 +0000 (0:00:01.141) 1:09:34.785 ******* 2026-02-02 06:43:26.511434 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '01c921aa07f8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-02 06:42:59.976124', 'end': '2026-02-02 06:43:00.037944', 'delta': '0:00:00.061820', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['01c921aa07f8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-02 06:43:26.511444 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'c530967d0aad', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-02 06:43:00.517792', 'end': '2026-02-02 06:43:00.565397', 'delta': '0:00:00.047605', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c530967d0aad'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-02 06:43:26.511469 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'a68c96a70534', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-02 06:43:01.440377', 'end': '2026-02-02 06:43:01.486868', 'delta': '0:00:00.046491', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a68c96a70534'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-02 06:43:26.511476 | orchestrator | 2026-02-02 06:43:26.511483 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-02 06:43:26.511490 | orchestrator | Monday 02 February 2026 06:43:08 +0000 (0:00:01.228) 1:09:36.014 ******* 2026-02-02 06:43:26.511497 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:43:26.511505 | orchestrator | 2026-02-02 06:43:26.511511 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-02 06:43:26.511518 | orchestrator | Monday 02 February 2026 06:43:10 +0000 (0:00:01.694) 1:09:37.709 ******* 2026-02-02 06:43:26.511524 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:43:26.511531 | orchestrator | 2026-02-02 06:43:26.511537 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-02-02 06:43:26.511544 | orchestrator | Monday 02 February 2026 06:43:11 +0000 (0:00:01.209) 1:09:38.919 ******* 2026-02-02 06:43:26.511551 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:43:26.511557 | orchestrator | 2026-02-02 06:43:26.511564 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-02 06:43:26.511570 | orchestrator | Monday 02 February 2026 06:43:12 +0000 (0:00:01.139) 1:09:40.058 ******* 2026-02-02 06:43:26.511577 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-02 06:43:26.511584 | orchestrator | 2026-02-02 06:43:26.511590 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 06:43:26.511597 | orchestrator | Monday 02 February 2026 06:43:14 +0000 (0:00:01.987) 1:09:42.046 ******* 2026-02-02 06:43:26.511603 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:43:26.511610 | orchestrator | 2026-02-02 06:43:26.511617 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-02 06:43:26.511623 | orchestrator | Monday 02 February 2026 06:43:15 +0000 (0:00:01.176) 1:09:43.223 ******* 2026-02-02 06:43:26.511642 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:43:26.511649 | orchestrator | 2026-02-02 06:43:26.511656 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-02 06:43:26.511663 | orchestrator | Monday 02 February 2026 06:43:16 +0000 (0:00:01.118) 1:09:44.341 ******* 2026-02-02 06:43:26.511670 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:43:26.511676 | orchestrator | 2026-02-02 06:43:26.511683 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-02 06:43:26.511690 | orchestrator | Monday 02 February 2026 06:43:17 +0000 (0:00:01.200) 1:09:45.542 ******* 2026-02-02 06:43:26.511696 | orchestrator | 
skipping: [testbed-node-5] 2026-02-02 06:43:26.511703 | orchestrator | 2026-02-02 06:43:26.511710 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-02 06:43:26.511716 | orchestrator | Monday 02 February 2026 06:43:19 +0000 (0:00:01.154) 1:09:46.696 ******* 2026-02-02 06:43:26.511723 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:43:26.511729 | orchestrator | 2026-02-02 06:43:26.511736 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-02 06:43:26.511743 | orchestrator | Monday 02 February 2026 06:43:20 +0000 (0:00:01.135) 1:09:47.832 ******* 2026-02-02 06:43:26.511756 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:43:26.511762 | orchestrator | 2026-02-02 06:43:26.511769 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-02 06:43:26.511776 | orchestrator | Monday 02 February 2026 06:43:21 +0000 (0:00:01.222) 1:09:49.054 ******* 2026-02-02 06:43:26.511810 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:43:26.511816 | orchestrator | 2026-02-02 06:43:26.511824 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-02 06:43:26.511831 | orchestrator | Monday 02 February 2026 06:43:22 +0000 (0:00:01.155) 1:09:50.210 ******* 2026-02-02 06:43:26.511840 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:43:26.511847 | orchestrator | 2026-02-02 06:43:26.511855 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-02 06:43:26.511863 | orchestrator | Monday 02 February 2026 06:43:23 +0000 (0:00:01.188) 1:09:51.398 ******* 2026-02-02 06:43:26.511871 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:43:26.511879 | orchestrator | 2026-02-02 06:43:26.511887 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-02 06:43:26.511895 
| orchestrator | Monday 02 February 2026 06:43:24 +0000 (0:00:01.113) 1:09:52.512 ******* 2026-02-02 06:43:26.511962 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:43:26.511971 | orchestrator | 2026-02-02 06:43:26.511979 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-02 06:43:26.511986 | orchestrator | Monday 02 February 2026 06:43:26 +0000 (0:00:01.267) 1:09:53.780 ******* 2026-02-02 06:43:26.511995 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:43:26.512004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e4fc6918--1796--5a48--9994--5f31e91196e6-osd--block--e4fc6918--1796--5a48--9994--5f31e91196e6', 'dm-uuid-LVM-o4NjfQidgd0d8Dt2ERSF2CVjMcc1iNdF2FL70XUBfeOz8qjNKOcDK13w6fcJ9Hta'], 'uuids': ['7d002011-c2d2-4478-8516-4cfbbdeaec0b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '10248bd5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2FL70X-UBfe-Oz8q-jNKO-cDK1-3w6f-cJ9Hta']}})  2026-02-02 06:43:26.512014 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e969e129-18ea-460f-85bc-8dfb49c82359', 'scsi-SQEMU_QEMU_HARDDISK_e969e129-18ea-460f-85bc-8dfb49c82359'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e969e129', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-02 06:43:26.512031 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qjdzC2-uhmD-TpwQ-o3eu-AERk-xIpn-IuLEqz', 'scsi-0QEMU_QEMU_HARDDISK_bc39994b-92aa-40f2-807e-6457f6f8ea40', 'scsi-SQEMU_QEMU_HARDDISK_bc39994b-92aa-40f2-807e-6457f6f8ea40'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bc39994b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--d54a22ee--8606--5662--853b--b39e232caa8f-osd--block--d54a22ee--8606--5662--853b--b39e232caa8f']}})  2026-02-02 06:43:27.682275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:43:27.682430 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:43:27.682446 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-24-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-02 06:43:27.682458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:43:27.682466 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-WJAtRW-tNBy-gZ9y-UEjc-aaQo-SOl1-TBvsQs', 'dm-uuid-CRYPT-LUKS2-756889fb99344894803ed86e669bebbd-WJAtRW-tNBy-gZ9y-UEjc-aaQo-SOl1-TBvsQs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-02 06:43:27.682474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:43:27.682482 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d54a22ee--8606--5662--853b--b39e232caa8f-osd--block--d54a22ee--8606--5662--853b--b39e232caa8f', 'dm-uuid-LVM-oyVS0lpzZeiZxxmfRvad67kbexmRBG5IWJAtRWtNBygZ9yUEjcaaQoSOl1TBvsQs'], 'uuids': ['756889fb-9934-4894-803e-d86e669bebbd'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'bc39994b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['WJAtRW-tNBy-gZ9y-UEjc-aaQo-SOl1-TBvsQs']}})  2026-02-02 06:43:27.682506 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-etyEN7-O4pu-QliJ-NKxv-0HLx-jIcx-JGZ0d7', 'scsi-0QEMU_QEMU_HARDDISK_10248bd5-0286-487e-81b0-791c797cb21b', 'scsi-SQEMU_QEMU_HARDDISK_10248bd5-0286-487e-81b0-791c797cb21b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '10248bd5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--e4fc6918--1796--5a48--9994--5f31e91196e6-osd--block--e4fc6918--1796--5a48--9994--5f31e91196e6']}})  2026-02-02 06:43:27.682536 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:43:27.682547 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2a7e3dd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part16', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part14', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part15', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part1', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-02 06:43:27.682556 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:43:27.682564 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-02 06:43:27.682583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2FL70X-UBfe-Oz8q-jNKO-cDK1-3w6f-cJ9Hta', 'dm-uuid-CRYPT-LUKS2-7d002011c2d2447885164cfbbdeaec0b-2FL70X-UBfe-Oz8q-jNKO-cDK1-3w6f-cJ9Hta'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-02 06:43:27.900516 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:43:27.900615 | orchestrator | 2026-02-02 06:43:27.900631 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-02 06:43:27.900643 | orchestrator | Monday 02 February 2026 06:43:27 +0000 (0:00:01.476) 1:09:55.256 ******* 2026-02-02 06:43:27.900658 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:43:27.900674 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e4fc6918--1796--5a48--9994--5f31e91196e6-osd--block--e4fc6918--1796--5a48--9994--5f31e91196e6', 'dm-uuid-LVM-o4NjfQidgd0d8Dt2ERSF2CVjMcc1iNdF2FL70XUBfeOz8qjNKOcDK13w6fcJ9Hta'], 'uuids': ['7d002011-c2d2-4478-8516-4cfbbdeaec0b'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '10248bd5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2FL70X-UBfe-Oz8q-jNKO-cDK1-3w6f-cJ9Hta']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:43:27.900687 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e969e129-18ea-460f-85bc-8dfb49c82359', 'scsi-SQEMU_QEMU_HARDDISK_e969e129-18ea-460f-85bc-8dfb49c82359'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e969e129', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:43:27.900700 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-qjdzC2-uhmD-TpwQ-o3eu-AERk-xIpn-IuLEqz', 'scsi-0QEMU_QEMU_HARDDISK_bc39994b-92aa-40f2-807e-6457f6f8ea40', 'scsi-SQEMU_QEMU_HARDDISK_bc39994b-92aa-40f2-807e-6457f6f8ea40'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bc39994b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--d54a22ee--8606--5662--853b--b39e232caa8f-osd--block--d54a22ee--8606--5662--853b--b39e232caa8f']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:43:27.900759 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:43:27.900773 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:43:27.900847 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-02-02-14-24-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:43:27.900861 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:43:27.900872 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-WJAtRW-tNBy-gZ9y-UEjc-aaQo-SOl1-TBvsQs', 'dm-uuid-CRYPT-LUKS2-756889fb99344894803ed86e669bebbd-WJAtRW-tNBy-gZ9y-UEjc-aaQo-SOl1-TBvsQs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:43:27.900884 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:43:27.900913 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d54a22ee--8606--5662--853b--b39e232caa8f-osd--block--d54a22ee--8606--5662--853b--b39e232caa8f', 'dm-uuid-LVM-oyVS0lpzZeiZxxmfRvad67kbexmRBG5IWJAtRWtNBygZ9yUEjcaaQoSOl1TBvsQs'], 'uuids': ['756889fb-9934-4894-803e-d86e669bebbd'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'bc39994b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['WJAtRW-tNBy-gZ9y-UEjc-aaQo-SOl1-TBvsQs']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:43:41.176004 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-etyEN7-O4pu-QliJ-NKxv-0HLx-jIcx-JGZ0d7', 'scsi-0QEMU_QEMU_HARDDISK_10248bd5-0286-487e-81b0-791c797cb21b', 'scsi-SQEMU_QEMU_HARDDISK_10248bd5-0286-487e-81b0-791c797cb21b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '10248bd5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--e4fc6918--1796--5a48--9994--5f31e91196e6-osd--block--e4fc6918--1796--5a48--9994--5f31e91196e6']}}, 'ansible_loop_var': 'item'})  2026-02-02 06:43:41.176127 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:43:41.176145 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2a7e3dd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part16', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part14', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part15', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part1', 'scsi-SQEMU_QEMU_HARDDISK_a2a7e3dd-293e-4828-91d3-e84de9ff6d73-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:43:41.176221 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:43:41.176247 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:43:41.176270 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2FL70X-UBfe-Oz8q-jNKO-cDK1-3w6f-cJ9Hta', 'dm-uuid-CRYPT-LUKS2-7d002011c2d2447885164cfbbdeaec0b-2FL70X-UBfe-Oz8q-jNKO-cDK1-3w6f-cJ9Hta'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-02 06:43:41.176287 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:43:41.176300 | orchestrator | 2026-02-02 06:43:41.176313 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-02 06:43:41.176324 | orchestrator | Monday 02 February 2026 06:43:29 +0000 (0:00:01.414) 1:09:56.671 ******* 2026-02-02 06:43:41.176335 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:43:41.176347 | orchestrator | 2026-02-02 06:43:41.176358 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-02 06:43:41.176368 | orchestrator | Monday 02 February 2026 06:43:30 +0000 (0:00:01.520) 1:09:58.192 ******* 2026-02-02 06:43:41.176379 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:43:41.176390 | orchestrator | 2026-02-02 06:43:41.176401 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-02 06:43:41.176421 | orchestrator | Monday 02 February 2026 06:43:31 +0000 (0:00:01.152) 1:09:59.344 ******* 2026-02-02 06:43:41.176432 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:43:41.176443 | orchestrator | 2026-02-02 06:43:41.176454 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-02 06:43:41.176464 | orchestrator | Monday 02 February 2026 06:43:33 +0000 (0:00:01.512) 1:10:00.857 ******* 2026-02-02 06:43:41.176475 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:43:41.176486 | orchestrator | 2026-02-02 06:43:41.176497 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-02 06:43:41.176507 | orchestrator | Monday 02 February 2026 06:43:34 +0000 (0:00:01.142) 1:10:01.999 ******* 2026-02-02 06:43:41.176518 | orchestrator | skipping: [testbed-node-5] 2026-02-02 
06:43:41.176530 | orchestrator | 2026-02-02 06:43:41.176543 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-02 06:43:41.176555 | orchestrator | Monday 02 February 2026 06:43:35 +0000 (0:00:01.213) 1:10:03.212 ******* 2026-02-02 06:43:41.176568 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:43:41.176580 | orchestrator | 2026-02-02 06:43:41.176592 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-02 06:43:41.176605 | orchestrator | Monday 02 February 2026 06:43:36 +0000 (0:00:01.228) 1:10:04.441 ******* 2026-02-02 06:43:41.176617 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-02 06:43:41.176630 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-02 06:43:41.176643 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-02 06:43:41.176654 | orchestrator | 2026-02-02 06:43:41.176664 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-02 06:43:41.176675 | orchestrator | Monday 02 February 2026 06:43:38 +0000 (0:00:02.015) 1:10:06.456 ******* 2026-02-02 06:43:41.176686 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-02 06:43:41.176696 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-02 06:43:41.176707 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-02 06:43:41.176718 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:43:41.176728 | orchestrator | 2026-02-02 06:43:41.176739 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-02 06:43:41.176750 | orchestrator | Monday 02 February 2026 06:43:40 +0000 (0:00:01.150) 1:10:07.607 ******* 2026-02-02 06:43:41.176760 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-02-02 06:43:41.176772 | 
orchestrator | 2026-02-02 06:43:41.176825 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-02 06:44:23.095447 | orchestrator | Monday 02 February 2026 06:43:41 +0000 (0:00:01.139) 1:10:08.746 ******* 2026-02-02 06:44:23.095573 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:44:23.095595 | orchestrator | 2026-02-02 06:44:23.095610 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-02 06:44:23.095623 | orchestrator | Monday 02 February 2026 06:43:42 +0000 (0:00:01.222) 1:10:09.969 ******* 2026-02-02 06:44:23.095637 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:44:23.095651 | orchestrator | 2026-02-02 06:44:23.095665 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-02 06:44:23.095678 | orchestrator | Monday 02 February 2026 06:43:43 +0000 (0:00:01.194) 1:10:11.163 ******* 2026-02-02 06:44:23.095692 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:44:23.095704 | orchestrator | 2026-02-02 06:44:23.095716 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-02 06:44:23.095729 | orchestrator | Monday 02 February 2026 06:43:44 +0000 (0:00:01.184) 1:10:12.348 ******* 2026-02-02 06:44:23.095742 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:44:23.095758 | orchestrator | 2026-02-02 06:44:23.095771 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-02 06:44:23.095883 | orchestrator | Monday 02 February 2026 06:43:46 +0000 (0:00:01.320) 1:10:13.668 ******* 2026-02-02 06:44:23.095906 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-02 06:44:23.095922 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-02 06:44:23.095936 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-02-02 06:44:23.095950 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:44:23.095964 | orchestrator | 2026-02-02 06:44:23.095978 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-02 06:44:23.095994 | orchestrator | Monday 02 February 2026 06:43:47 +0000 (0:00:01.425) 1:10:15.094 ******* 2026-02-02 06:44:23.096010 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-02 06:44:23.096026 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-02 06:44:23.096039 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-02 06:44:23.096055 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:44:23.096070 | orchestrator | 2026-02-02 06:44:23.096086 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-02 06:44:23.096101 | orchestrator | Monday 02 February 2026 06:43:48 +0000 (0:00:01.383) 1:10:16.477 ******* 2026-02-02 06:44:23.096115 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-02 06:44:23.096131 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-02 06:44:23.096147 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-02 06:44:23.096162 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:44:23.096177 | orchestrator | 2026-02-02 06:44:23.096192 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-02 06:44:23.096207 | orchestrator | Monday 02 February 2026 06:43:50 +0000 (0:00:01.444) 1:10:17.921 ******* 2026-02-02 06:44:23.096222 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:44:23.096237 | orchestrator | 2026-02-02 06:44:23.096252 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-02 06:44:23.096267 | orchestrator | Monday 02 February 2026 06:43:51 +0000 
(0:00:01.163) 1:10:19.085 ******* 2026-02-02 06:44:23.096280 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-02 06:44:23.096294 | orchestrator | 2026-02-02 06:44:23.096307 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-02 06:44:23.096321 | orchestrator | Monday 02 February 2026 06:43:52 +0000 (0:00:01.322) 1:10:20.407 ******* 2026-02-02 06:44:23.096334 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 06:44:23.096349 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 06:44:23.096363 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 06:44:23.096376 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-02 06:44:23.096389 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-02 06:44:23.096403 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-02-02 06:44:23.096417 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-02 06:44:23.096431 | orchestrator | 2026-02-02 06:44:23.096444 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-02 06:44:23.096457 | orchestrator | Monday 02 February 2026 06:43:55 +0000 (0:00:02.176) 1:10:22.584 ******* 2026-02-02 06:44:23.096471 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-02 06:44:23.096484 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-02 06:44:23.096498 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-02 06:44:23.096512 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-02-02 06:44:23.096536 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-02 06:44:23.096550 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-02-02 06:44:23.096563 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-02 06:44:23.096577 | orchestrator | 2026-02-02 06:44:23.096591 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-02-02 06:44:23.096605 | orchestrator | Monday 02 February 2026 06:43:57 +0000 (0:00:02.267) 1:10:24.852 ******* 2026-02-02 06:44:23.096619 | orchestrator | changed: [testbed-node-5] 2026-02-02 06:44:23.096632 | orchestrator | 2026-02-02 06:44:23.096671 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-02-02 06:44:23.096685 | orchestrator | Monday 02 February 2026 06:43:59 +0000 (0:00:01.959) 1:10:26.811 ******* 2026-02-02 06:44:23.096699 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-02 06:44:23.096714 | orchestrator | 2026-02-02 06:44:23.096729 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-02-02 06:44:23.096742 | orchestrator | Monday 02 February 2026 06:44:01 +0000 (0:00:02.596) 1:10:29.408 ******* 2026-02-02 06:44:23.096755 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-02 06:44:23.096769 | orchestrator | 2026-02-02 06:44:23.096784 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-02 06:44:23.096861 | orchestrator | Monday 02 February 2026 06:44:03 +0000 (0:00:01.880) 1:10:31.288 ******* 2026-02-02 06:44:23.096880 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-02-02 06:44:23.096892 | orchestrator | 2026-02-02 06:44:23.096906 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-02 06:44:23.096936 | orchestrator | Monday 02 February 2026 06:44:04 +0000 (0:00:01.145) 1:10:32.434 ******* 2026-02-02 06:44:23.096952 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-02-02 06:44:23.096967 | orchestrator | 2026-02-02 06:44:23.096981 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-02 06:44:23.096996 | orchestrator | Monday 02 February 2026 06:44:05 +0000 (0:00:01.146) 1:10:33.580 ******* 2026-02-02 06:44:23.097010 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:44:23.097025 | orchestrator | 2026-02-02 06:44:23.097039 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-02 06:44:23.097052 | orchestrator | Monday 02 February 2026 06:44:07 +0000 (0:00:01.113) 1:10:34.694 ******* 2026-02-02 06:44:23.097066 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:44:23.097080 | orchestrator | 2026-02-02 06:44:23.097095 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-02-02 06:44:23.097109 | orchestrator | Monday 02 February 2026 06:44:08 +0000 (0:00:01.564) 1:10:36.258 ******* 2026-02-02 06:44:23.097123 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:44:23.097136 | orchestrator | 2026-02-02 06:44:23.097151 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-02 06:44:23.097164 | orchestrator | Monday 02 February 2026 06:44:10 +0000 (0:00:01.525) 1:10:37.783 ******* 2026-02-02 06:44:23.097178 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:44:23.097193 | orchestrator | 2026-02-02 06:44:23.097205 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-02 06:44:23.097218 | orchestrator | Monday 02 February 2026 06:44:11 +0000 (0:00:01.514) 1:10:39.298 ******* 2026-02-02 06:44:23.097232 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:44:23.097245 | orchestrator | 2026-02-02 06:44:23.097260 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-02 06:44:23.097273 | orchestrator | Monday 02 February 2026 06:44:12 +0000 (0:00:01.143) 1:10:40.442 ******* 2026-02-02 06:44:23.097287 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:44:23.097328 | orchestrator | 2026-02-02 06:44:23.097343 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-02 06:44:23.097356 | orchestrator | Monday 02 February 2026 06:44:13 +0000 (0:00:01.139) 1:10:41.581 ******* 2026-02-02 06:44:23.097369 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:44:23.097381 | orchestrator | 2026-02-02 06:44:23.097394 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-02 06:44:23.097406 | orchestrator | Monday 02 February 2026 06:44:15 +0000 (0:00:01.132) 1:10:42.714 ******* 2026-02-02 06:44:23.097418 | 
orchestrator | ok: [testbed-node-5] 2026-02-02 06:44:23.097429 | orchestrator | 2026-02-02 06:44:23.097440 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-02 06:44:23.097451 | orchestrator | Monday 02 February 2026 06:44:16 +0000 (0:00:01.608) 1:10:44.322 ******* 2026-02-02 06:44:23.097463 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:44:23.097474 | orchestrator | 2026-02-02 06:44:23.097486 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-02 06:44:23.097499 | orchestrator | Monday 02 February 2026 06:44:18 +0000 (0:00:01.520) 1:10:45.843 ******* 2026-02-02 06:44:23.097511 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:44:23.097523 | orchestrator | 2026-02-02 06:44:23.097535 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-02 06:44:23.097546 | orchestrator | Monday 02 February 2026 06:44:19 +0000 (0:00:00.814) 1:10:46.658 ******* 2026-02-02 06:44:23.097558 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:44:23.097570 | orchestrator | 2026-02-02 06:44:23.097583 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-02 06:44:23.097596 | orchestrator | Monday 02 February 2026 06:44:19 +0000 (0:00:00.809) 1:10:47.467 ******* 2026-02-02 06:44:23.097608 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:44:23.097620 | orchestrator | 2026-02-02 06:44:23.097632 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-02 06:44:23.097644 | orchestrator | Monday 02 February 2026 06:44:20 +0000 (0:00:00.794) 1:10:48.261 ******* 2026-02-02 06:44:23.097656 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:44:23.097668 | orchestrator | 2026-02-02 06:44:23.097680 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-02 06:44:23.097693 
| orchestrator | Monday 02 February 2026 06:44:21 +0000 (0:00:00.824) 1:10:49.086 ******* 2026-02-02 06:44:23.097706 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:44:23.097718 | orchestrator | 2026-02-02 06:44:23.097730 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-02 06:44:23.097742 | orchestrator | Monday 02 February 2026 06:44:22 +0000 (0:00:00.813) 1:10:49.899 ******* 2026-02-02 06:44:23.097754 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:44:23.097765 | orchestrator | 2026-02-02 06:44:23.097791 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-02 06:45:03.539275 | orchestrator | Monday 02 February 2026 06:44:23 +0000 (0:00:00.768) 1:10:50.667 ******* 2026-02-02 06:45:03.539390 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:45:03.539407 | orchestrator | 2026-02-02 06:45:03.539420 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-02 06:45:03.539432 | orchestrator | Monday 02 February 2026 06:44:23 +0000 (0:00:00.830) 1:10:51.498 ******* 2026-02-02 06:45:03.539452 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:45:03.539472 | orchestrator | 2026-02-02 06:45:03.539492 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-02 06:45:03.539513 | orchestrator | Monday 02 February 2026 06:44:24 +0000 (0:00:00.774) 1:10:52.272 ******* 2026-02-02 06:45:03.539533 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:45:03.539554 | orchestrator | 2026-02-02 06:45:03.539566 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-02 06:45:03.539577 | orchestrator | Monday 02 February 2026 06:44:25 +0000 (0:00:00.812) 1:10:53.085 ******* 2026-02-02 06:45:03.539588 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:45:03.539620 | orchestrator | 2026-02-02 06:45:03.539632 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-02 06:45:03.539643 | orchestrator | Monday 02 February 2026 06:44:26 +0000 (0:00:00.860) 1:10:53.946 ******* 2026-02-02 06:45:03.539653 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:45:03.539664 | orchestrator | 2026-02-02 06:45:03.539675 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-02 06:45:03.539685 | orchestrator | Monday 02 February 2026 06:44:27 +0000 (0:00:00.911) 1:10:54.857 ******* 2026-02-02 06:45:03.539696 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:45:03.539707 | orchestrator | 2026-02-02 06:45:03.539717 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-02 06:45:03.539728 | orchestrator | Monday 02 February 2026 06:44:28 +0000 (0:00:00.744) 1:10:55.602 ******* 2026-02-02 06:45:03.539739 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:45:03.539749 | orchestrator | 2026-02-02 06:45:03.539760 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-02 06:45:03.539771 | orchestrator | Monday 02 February 2026 06:44:28 +0000 (0:00:00.766) 1:10:56.368 ******* 2026-02-02 06:45:03.539781 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:45:03.539792 | orchestrator | 2026-02-02 06:45:03.539804 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-02 06:45:03.539852 | orchestrator | Monday 02 February 2026 06:44:29 +0000 (0:00:00.763) 1:10:57.132 ******* 2026-02-02 06:45:03.539871 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:45:03.539889 | orchestrator | 2026-02-02 06:45:03.539907 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-02 06:45:03.539925 | orchestrator | Monday 02 February 2026 06:44:30 +0000 (0:00:00.796) 1:10:57.928 ******* 
2026-02-02 06:45:03.539942 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:45:03.539960 | orchestrator | 2026-02-02 06:45:03.539978 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-02 06:45:03.539996 | orchestrator | Monday 02 February 2026 06:44:31 +0000 (0:00:00.813) 1:10:58.741 ******* 2026-02-02 06:45:03.540015 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:45:03.540034 | orchestrator | 2026-02-02 06:45:03.540052 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-02 06:45:03.540072 | orchestrator | Monday 02 February 2026 06:44:31 +0000 (0:00:00.769) 1:10:59.511 ******* 2026-02-02 06:45:03.540090 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:45:03.540109 | orchestrator | 2026-02-02 06:45:03.540128 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-02 06:45:03.540149 | orchestrator | Monday 02 February 2026 06:44:32 +0000 (0:00:00.779) 1:11:00.290 ******* 2026-02-02 06:45:03.540167 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:45:03.540186 | orchestrator | 2026-02-02 06:45:03.540205 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-02 06:45:03.540224 | orchestrator | Monday 02 February 2026 06:44:33 +0000 (0:00:00.780) 1:11:01.071 ******* 2026-02-02 06:45:03.540242 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:45:03.540260 | orchestrator | 2026-02-02 06:45:03.540277 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-02 06:45:03.540294 | orchestrator | Monday 02 February 2026 06:44:34 +0000 (0:00:00.774) 1:11:01.846 ******* 2026-02-02 06:45:03.540311 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:45:03.540329 | orchestrator | 2026-02-02 06:45:03.540348 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-02-02 06:45:03.540366 | orchestrator | Monday 02 February 2026 06:44:35 +0000 (0:00:00.767) 1:11:02.614 ******* 2026-02-02 06:45:03.540385 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:45:03.540404 | orchestrator | 2026-02-02 06:45:03.540423 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-02 06:45:03.540441 | orchestrator | Monday 02 February 2026 06:44:35 +0000 (0:00:00.758) 1:11:03.372 ******* 2026-02-02 06:45:03.540472 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:45:03.540484 | orchestrator | 2026-02-02 06:45:03.540495 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-02 06:45:03.540506 | orchestrator | Monday 02 February 2026 06:44:37 +0000 (0:00:01.611) 1:11:04.983 ******* 2026-02-02 06:45:03.540517 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:45:03.540527 | orchestrator | 2026-02-02 06:45:03.540538 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-02 06:45:03.540549 | orchestrator | Monday 02 February 2026 06:44:39 +0000 (0:00:02.070) 1:11:07.054 ******* 2026-02-02 06:45:03.540561 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5 2026-02-02 06:45:03.540572 | orchestrator | 2026-02-02 06:45:03.540583 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-02 06:45:03.540594 | orchestrator | Monday 02 February 2026 06:44:40 +0000 (0:00:01.117) 1:11:08.171 ******* 2026-02-02 06:45:03.540605 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:45:03.540616 | orchestrator | 2026-02-02 06:45:03.540627 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-02 06:45:03.540659 | orchestrator | Monday 02 February 2026 06:44:41 +0000 (0:00:01.151) 1:11:09.323 ******* 
2026-02-02 06:45:03.540671 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:45:03.540681 | orchestrator | 2026-02-02 06:45:03.540692 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-02 06:45:03.540703 | orchestrator | Monday 02 February 2026 06:44:42 +0000 (0:00:01.108) 1:11:10.432 ******* 2026-02-02 06:45:03.540714 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-02 06:45:03.540725 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-02 06:45:03.540735 | orchestrator | 2026-02-02 06:45:03.540746 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-02 06:45:03.540757 | orchestrator | Monday 02 February 2026 06:44:44 +0000 (0:00:01.847) 1:11:12.279 ******* 2026-02-02 06:45:03.540768 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:45:03.540778 | orchestrator | 2026-02-02 06:45:03.540789 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-02 06:45:03.540800 | orchestrator | Monday 02 February 2026 06:44:46 +0000 (0:00:01.443) 1:11:13.723 ******* 2026-02-02 06:45:03.540840 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:45:03.540860 | orchestrator | 2026-02-02 06:45:03.540878 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-02 06:45:03.540896 | orchestrator | Monday 02 February 2026 06:44:47 +0000 (0:00:01.138) 1:11:14.861 ******* 2026-02-02 06:45:03.540914 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:45:03.540932 | orchestrator | 2026-02-02 06:45:03.540952 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-02 06:45:03.540971 | orchestrator | Monday 02 February 2026 06:44:48 +0000 (0:00:00.807) 1:11:15.669 ******* 2026-02-02 06:45:03.540988 | orchestrator | 
skipping: [testbed-node-5]
2026-02-02 06:45:03.541008 | orchestrator |
2026-02-02 06:45:03.541020 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-02 06:45:03.541031 | orchestrator | Monday 02 February 2026 06:44:48 +0000 (0:00:00.817) 1:11:16.487 *******
2026-02-02 06:45:03.541042 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-02-02 06:45:03.541053 | orchestrator |
2026-02-02 06:45:03.541063 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-02 06:45:03.541074 | orchestrator | Monday 02 February 2026 06:44:50 +0000 (0:00:01.149) 1:11:17.637 *******
2026-02-02 06:45:03.541085 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:45:03.541096 | orchestrator |
2026-02-02 06:45:03.541107 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-02 06:45:03.541117 | orchestrator | Monday 02 February 2026 06:44:51 +0000 (0:00:01.711) 1:11:19.348 *******
2026-02-02 06:45:03.541138 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-02 06:45:03.541149 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-02 06:45:03.541160 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-02 06:45:03.541170 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:45:03.541181 | orchestrator |
2026-02-02 06:45:03.541192 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-02 06:45:03.541203 | orchestrator | Monday 02 February 2026 06:44:52 +0000 (0:00:01.213) 1:11:20.562 *******
2026-02-02 06:45:03.541213 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:45:03.541224 | orchestrator |
2026-02-02 06:45:03.541234 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-02 06:45:03.541245 | orchestrator | Monday 02 February 2026 06:44:54 +0000 (0:00:01.093) 1:11:21.655 *******
2026-02-02 06:45:03.541256 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:45:03.541267 | orchestrator |
2026-02-02 06:45:03.541277 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-02 06:45:03.541288 | orchestrator | Monday 02 February 2026 06:44:55 +0000 (0:00:01.144) 1:11:22.800 *******
2026-02-02 06:45:03.541299 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:45:03.541309 | orchestrator |
2026-02-02 06:45:03.541320 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-02 06:45:03.541331 | orchestrator | Monday 02 February 2026 06:44:56 +0000 (0:00:01.146) 1:11:23.947 *******
2026-02-02 06:45:03.541341 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:45:03.541352 | orchestrator |
2026-02-02 06:45:03.541363 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-02 06:45:03.541373 | orchestrator | Monday 02 February 2026 06:44:57 +0000 (0:00:01.120) 1:11:25.068 *******
2026-02-02 06:45:03.541384 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:45:03.541395 | orchestrator |
2026-02-02 06:45:03.541405 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-02 06:45:03.541416 | orchestrator | Monday 02 February 2026 06:44:58 +0000 (0:00:00.776) 1:11:25.844 *******
2026-02-02 06:45:03.541427 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:45:03.541437 | orchestrator |
2026-02-02 06:45:03.541448 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-02 06:45:03.541459 | orchestrator | Monday 02 February 2026 06:45:00 +0000 (0:00:02.094) 1:11:27.939 *******
2026-02-02 06:45:03.541469 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:45:03.541480 | orchestrator |
2026-02-02 06:45:03.541491 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-02 06:45:03.541501 | orchestrator | Monday 02 February 2026 06:45:01 +0000 (0:00:00.827) 1:11:28.767 *******
2026-02-02 06:45:03.541512 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-02-02 06:45:03.541523 | orchestrator |
2026-02-02 06:45:03.541533 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-02 06:45:03.541544 | orchestrator | Monday 02 February 2026 06:45:02 +0000 (0:00:01.172) 1:11:29.939 *******
2026-02-02 06:45:03.541555 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:45:03.541566 | orchestrator |
2026-02-02 06:45:03.541576 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-02 06:45:03.541596 | orchestrator | Monday 02 February 2026 06:45:03 +0000 (0:00:01.169) 1:11:31.108 *******
2026-02-02 06:45:44.989341 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:45:44.989453 | orchestrator |
2026-02-02 06:45:44.989471 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-02 06:45:44.989485 | orchestrator | Monday 02 February 2026 06:45:04 +0000 (0:00:01.236) 1:11:32.346 *******
2026-02-02 06:45:44.989497 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:45:44.989508 | orchestrator |
2026-02-02 06:45:44.989520 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-02 06:45:44.989531 | orchestrator | Monday 02 February 2026 06:45:05 +0000 (0:00:01.203) 1:11:33.549 *******
2026-02-02 06:45:44.989565 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:45:44.989582 | orchestrator |
2026-02-02 06:45:44.989601 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-02 06:45:44.989612 | orchestrator | Monday 02 February 2026 06:45:07 +0000 (0:00:01.231) 1:11:34.781 *******
2026-02-02 06:45:44.989623 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:45:44.989634 | orchestrator |
2026-02-02 06:45:44.989644 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-02 06:45:44.989655 | orchestrator | Monday 02 February 2026 06:45:08 +0000 (0:00:01.133) 1:11:35.914 *******
2026-02-02 06:45:44.989666 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:45:44.989676 | orchestrator |
2026-02-02 06:45:44.989687 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-02 06:45:44.989698 | orchestrator | Monday 02 February 2026 06:45:09 +0000 (0:00:01.165) 1:11:37.080 *******
2026-02-02 06:45:44.989709 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:45:44.989719 | orchestrator |
2026-02-02 06:45:44.989730 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-02 06:45:44.989741 | orchestrator | Monday 02 February 2026 06:45:10 +0000 (0:00:01.162) 1:11:38.242 *******
2026-02-02 06:45:44.989751 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:45:44.989762 | orchestrator |
2026-02-02 06:45:44.989773 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-02 06:45:44.989784 | orchestrator | Monday 02 February 2026 06:45:11 +0000 (0:00:01.128) 1:11:39.371 *******
2026-02-02 06:45:44.989794 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:45:44.989806 | orchestrator |
2026-02-02 06:45:44.989817 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-02 06:45:44.989827 | orchestrator | Monday 02 February 2026 06:45:12 +0000 (0:00:00.776) 1:11:40.147 *******
2026-02-02 06:45:44.989867 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-02-02 06:45:44.989879 | orchestrator |
2026-02-02 06:45:44.989890 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-02 06:45:44.989901 | orchestrator | Monday 02 February 2026 06:45:13 +0000 (0:00:01.105) 1:11:41.252 *******
2026-02-02 06:45:44.989912 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-02-02 06:45:44.989923 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-02-02 06:45:44.989946 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-02-02 06:45:44.989957 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-02-02 06:45:44.989967 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-02-02 06:45:44.989978 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-02-02 06:45:44.989988 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-02-02 06:45:44.990008 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-02-02 06:45:44.990078 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-02 06:45:44.990090 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-02 06:45:44.990100 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-02 06:45:44.990111 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-02 06:45:44.990122 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-02 06:45:44.990133 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-02 06:45:44.990153 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-02-02 06:45:44.990164 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-02-02 06:45:44.990176 | orchestrator |
2026-02-02 06:45:44.990187 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-02 06:45:44.990198 | orchestrator | Monday 02 February 2026 06:45:19 +0000 (0:00:06.081) 1:11:47.334 *******
2026-02-02 06:45:44.990218 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-02-02 06:45:44.990230 | orchestrator |
2026-02-02 06:45:44.990240 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-02 06:45:44.990251 | orchestrator | Monday 02 February 2026 06:45:20 +0000 (0:00:01.141) 1:11:48.476 *******
2026-02-02 06:45:44.990262 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-02 06:45:44.990274 | orchestrator |
2026-02-02 06:45:44.990285 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-02 06:45:44.990296 | orchestrator | Monday 02 February 2026 06:45:22 +0000 (0:00:01.517) 1:11:49.994 *******
2026-02-02 06:45:44.990307 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-02 06:45:44.990318 | orchestrator |
2026-02-02 06:45:44.990329 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-02 06:45:44.990340 | orchestrator | Monday 02 February 2026 06:45:24 +0000 (0:00:01.686) 1:11:51.680 *******
2026-02-02 06:45:44.990350 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:45:44.990361 | orchestrator |
2026-02-02 06:45:44.990372 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-02 06:45:44.990401 | orchestrator | Monday 02 February 2026 06:45:24 +0000 (0:00:00.886) 1:11:52.567 *******
2026-02-02 06:45:44.990413 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:45:44.990424 | orchestrator |
2026-02-02 06:45:44.990435 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-02 06:45:44.990445 | orchestrator | Monday 02 February 2026 06:45:25 +0000 (0:00:00.771) 1:11:53.338 *******
2026-02-02 06:45:44.990456 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:45:44.990467 | orchestrator |
2026-02-02 06:45:44.990478 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-02 06:45:44.990489 | orchestrator | Monday 02 February 2026 06:45:26 +0000 (0:00:00.779) 1:11:54.118 *******
2026-02-02 06:45:44.990500 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:45:44.990510 | orchestrator |
2026-02-02 06:45:44.990521 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-02 06:45:44.990532 | orchestrator | Monday 02 February 2026 06:45:27 +0000 (0:00:00.770) 1:11:54.888 *******
2026-02-02 06:45:44.990543 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:45:44.990554 | orchestrator |
2026-02-02 06:45:44.990565 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-02 06:45:44.990576 | orchestrator | Monday 02 February 2026 06:45:28 +0000 (0:00:00.751) 1:11:55.640 *******
2026-02-02 06:45:44.990587 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:45:44.990598 | orchestrator |
2026-02-02 06:45:44.990609 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-02 06:45:44.990620 | orchestrator | Monday 02 February 2026 06:45:28 +0000 (0:00:00.778) 1:11:56.419 *******
2026-02-02 06:45:44.990631 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:45:44.990641 | orchestrator |
2026-02-02 06:45:44.990652 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-02 06:45:44.990663 | orchestrator | Monday 02 February 2026 06:45:29 +0000 (0:00:00.768) 1:11:57.187 *******
2026-02-02 06:45:44.990674 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:45:44.990685 | orchestrator |
2026-02-02 06:45:44.990696 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-02 06:45:44.990707 | orchestrator | Monday 02 February 2026 06:45:30 +0000 (0:00:00.778) 1:11:57.966 *******
2026-02-02 06:45:44.990717 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:45:44.990728 | orchestrator |
2026-02-02 06:45:44.990739 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-02 06:45:44.990750 | orchestrator | Monday 02 February 2026 06:45:31 +0000 (0:00:00.815) 1:11:58.781 *******
2026-02-02 06:45:44.990768 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:45:44.990779 | orchestrator |
2026-02-02 06:45:44.990790 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-02 06:45:44.990801 | orchestrator | Monday 02 February 2026 06:45:31 +0000 (0:00:00.777) 1:11:59.559 *******
2026-02-02 06:45:44.990811 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:45:44.990822 | orchestrator |
2026-02-02 06:45:44.990853 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-02 06:45:44.990865 | orchestrator | Monday 02 February 2026 06:45:32 +0000 (0:00:00.755) 1:12:00.314 *******
2026-02-02 06:45:44.990875 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-02-02 06:45:44.990886 | orchestrator |
2026-02-02 06:45:44.990897 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-02 06:45:44.990908 | orchestrator | Monday 02 February 2026 06:45:36 +0000 (0:00:04.048) 1:12:04.363 *******
2026-02-02 06:45:44.990919 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-02 06:45:44.990930 | orchestrator |
2026-02-02 06:45:44.990940 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-02 06:45:44.990951 | orchestrator | Monday 02 February 2026 06:45:37 +0000 (0:00:00.820) 1:12:05.183 *******
2026-02-02 06:45:44.990965 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-02 06:45:44.990979 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-02 06:45:44.990991 | orchestrator |
2026-02-02 06:45:44.991002 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-02 06:45:44.991012 | orchestrator | Monday 02 February 2026 06:45:42 +0000 (0:00:04.921) 1:12:10.105 *******
2026-02-02 06:45:44.991023 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:45:44.991034 | orchestrator |
2026-02-02 06:45:44.991044 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-02 06:45:44.991055 | orchestrator | Monday 02 February 2026 06:45:43 +0000 (0:00:00.834) 1:12:10.939 *******
2026-02-02 06:45:44.991066 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:45:44.991085 | orchestrator |
2026-02-02 06:45:44.991102 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-02 06:45:44.991121 | orchestrator | Monday 02 February 2026 06:45:44 +0000 (0:00:00.790) 1:12:11.729 *******
2026-02-02 06:45:44.991140 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:45:44.991158 | orchestrator |
2026-02-02 06:45:44.991176 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-02 06:45:44.991196 | orchestrator | Monday 02 February 2026 06:45:44 +0000 (0:00:00.826) 1:12:12.555 *******
2026-02-02 06:46:49.153151 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:46:49.153244 | orchestrator |
2026-02-02 06:46:49.153255 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-02 06:46:49.153263 | orchestrator | Monday 02 February 2026 06:45:45 +0000 (0:00:00.777) 1:12:13.333 *******
2026-02-02 06:46:49.153271 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:46:49.153278 | orchestrator |
2026-02-02 06:46:49.153285 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-02 06:46:49.153292 | orchestrator | Monday 02 February 2026 06:45:46 +0000 (0:00:00.800) 1:12:14.134 *******
2026-02-02 06:46:49.153319 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:46:49.153327 | orchestrator |
2026-02-02 06:46:49.153333 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-02 06:46:49.153340 | orchestrator | Monday 02 February 2026 06:45:47 +0000 (0:00:00.894) 1:12:15.028 *******
2026-02-02 06:46:49.153347 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-02 06:46:49.153354 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-02 06:46:49.153361 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-02 06:46:49.153367 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:46:49.153374 | orchestrator |
2026-02-02 06:46:49.153380 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-02 06:46:49.153387 | orchestrator | Monday 02 February 2026 06:45:48 +0000 (0:00:01.106) 1:12:16.134 *******
2026-02-02 06:46:49.153394 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-02 06:46:49.153400 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-02 06:46:49.153407 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-02 06:46:49.153413 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:46:49.153420 | orchestrator |
2026-02-02 06:46:49.153427 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-02 06:46:49.153433 | orchestrator | Monday 02 February 2026 06:45:49 +0000 (0:00:01.068) 1:12:17.203 *******
2026-02-02 06:46:49.153440 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-02 06:46:49.153447 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-02 06:46:49.153453 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-02 06:46:49.153460 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:46:49.153466 | orchestrator |
2026-02-02 06:46:49.153473 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-02 06:46:49.153481 | orchestrator | Monday 02 February 2026 06:45:50 +0000 (0:00:01.061) 1:12:18.264 *******
2026-02-02 06:46:49.153487 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:46:49.153494 | orchestrator |
2026-02-02 06:46:49.153501 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-02 06:46:49.153507 | orchestrator | Monday 02 February 2026 06:45:51 +0000 (0:00:00.844) 1:12:19.108 *******
2026-02-02 06:46:49.153514 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-02 06:46:49.153521 | orchestrator |
2026-02-02 06:46:49.153527 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-02 06:46:49.153534 | orchestrator | Monday 02 February 2026 06:45:52 +0000 (0:00:00.965) 1:12:20.073 *******
2026-02-02 06:46:49.153541 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:46:49.153547 | orchestrator |
2026-02-02 06:46:49.153554 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-02-02 06:46:49.153560 | orchestrator | Monday 02 February 2026 06:45:54 +0000 (0:00:01.900) 1:12:21.974 *******
2026-02-02 06:46:49.153567 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-5
2026-02-02 06:46:49.153573 | orchestrator |
2026-02-02 06:46:49.153580 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-02 06:46:49.153587 | orchestrator | Monday 02 February 2026 06:45:55 +0000 (0:00:01.154) 1:12:23.129 *******
2026-02-02 06:46:49.153593 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 06:46:49.153600 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-02 06:46:49.153607 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-02 06:46:49.153613 | orchestrator |
2026-02-02 06:46:49.153620 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-02 06:46:49.153626 | orchestrator | Monday 02 February 2026 06:45:58 +0000 (0:00:03.122) 1:12:26.251 *******
2026-02-02 06:46:49.153633 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-02-02 06:46:49.153640 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-02 06:46:49.153651 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:46:49.153658 | orchestrator |
2026-02-02 06:46:49.153665 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-02-02 06:46:49.153671 | orchestrator | Monday 02 February 2026 06:46:00 +0000 (0:00:01.997) 1:12:28.248 *******
2026-02-02 06:46:49.153678 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:46:49.153685 | orchestrator |
2026-02-02 06:46:49.153691 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-02-02 06:46:49.153698 | orchestrator | Monday 02 February 2026 06:46:01 +0000 (0:00:00.809) 1:12:29.058 *******
2026-02-02 06:46:49.153706 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-5
2026-02-02 06:46:49.153714 | orchestrator |
2026-02-02 06:46:49.153724 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-02-02 06:46:49.153736 | orchestrator | Monday 02 February 2026 06:46:02 +0000 (0:00:01.132) 1:12:30.191 *******
2026-02-02 06:46:49.153749 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-02 06:46:49.153762 | orchestrator |
2026-02-02 06:46:49.153774 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-02-02 06:46:49.153787 | orchestrator | Monday 02 February 2026 06:46:04 +0000 (0:00:01.623) 1:12:31.814 *******
2026-02-02 06:46:49.153815 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 06:46:49.153828 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-02-02 06:46:49.153840 | orchestrator |
2026-02-02 06:46:49.153870 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-02-02 06:46:49.153882 | orchestrator | Monday 02 February 2026 06:46:09 +0000 (0:00:05.080) 1:12:36.894 *******
2026-02-02 06:46:49.153894 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-02 06:46:49.153905 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-02 06:46:49.153916 | orchestrator |
2026-02-02 06:46:49.153927 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-02-02 06:46:49.153939 | orchestrator | Monday 02 February 2026 06:46:12 +0000 (0:00:03.061) 1:12:39.956 *******
2026-02-02 06:46:49.153949 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-02-02 06:46:49.153960 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:46:49.153973 | orchestrator |
2026-02-02 06:46:49.153985 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-02-02 06:46:49.153996 | orchestrator | Monday 02 February 2026 06:46:14 +0000 (0:00:01.698) 1:12:41.655 *******
2026-02-02 06:46:49.154008 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-5
2026-02-02 06:46:49.154075 | orchestrator |
2026-02-02 06:46:49.154084 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-02-02 06:46:49.154092 | orchestrator | Monday 02 February 2026 06:46:15 +0000 (0:00:01.304) 1:12:42.960 *******
2026-02-02 06:46:49.154101 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-02 06:46:49.154109 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-02 06:46:49.154117 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-02 06:46:49.154125 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-02 06:46:49.154133 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-02 06:46:49.154139 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:46:49.154153 | orchestrator |
2026-02-02 06:46:49.154160 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-02-02 06:46:49.154167 | orchestrator | Monday 02 February 2026 06:46:17 +0000 (0:00:01.655) 1:12:44.615 *******
2026-02-02 06:46:49.154173 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-02 06:46:49.154180 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-02 06:46:49.154187 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-02 06:46:49.154193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-02 06:46:49.154200 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-02 06:46:49.154207 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:46:49.154213 | orchestrator |
2026-02-02 06:46:49.154220 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-02-02 06:46:49.154227 | orchestrator | Monday 02 February 2026 06:46:18 +0000 (0:00:01.559) 1:12:46.174 *******
2026-02-02 06:46:49.154233 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-02 06:46:49.154240 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-02 06:46:49.154247 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-02 06:46:49.154254 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-02 06:46:49.154262 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-02 06:46:49.154268 | orchestrator |
2026-02-02 06:46:49.154275 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-02-02 06:46:49.154282 | orchestrator | Monday 02 February 2026 06:46:48 +0000 (0:00:29.705) 1:13:15.879 *******
2026-02-02 06:46:49.154289 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:46:49.154295 | orchestrator |
2026-02-02 06:46:49.154302 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-02-02 06:46:49.154316 | orchestrator | Monday 02 February 2026 06:46:49 +0000 (0:00:00.843) 1:13:16.723 *******
2026-02-02 06:47:41.803384 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:47:41.803537 | orchestrator |
2026-02-02 06:47:41.803565 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-02-02 06:47:41.803843 | orchestrator | Monday 02 February 2026 06:46:49 +0000 (0:00:00.777) 1:13:17.500 *******
2026-02-02 06:47:41.804293 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-5
2026-02-02 06:47:41.804915 | orchestrator |
2026-02-02 06:47:41.804953 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-02-02 06:47:41.804971 | orchestrator | Monday 02 February 2026 06:46:51 +0000 (0:00:01.123) 1:13:18.624 *******
2026-02-02 06:47:41.804987 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-5
2026-02-02 06:47:41.805004 | orchestrator |
2026-02-02 06:47:41.805021 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-02-02 06:47:41.805039 | orchestrator | Monday 02 February 2026 06:46:52 +0000 (0:00:01.094) 1:13:19.718 *******
2026-02-02 06:47:41.805056 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:47:41.805073 | orchestrator |
2026-02-02 06:47:41.805090 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-02-02 06:47:41.805144 | orchestrator | Monday 02 February 2026 06:46:54 +0000 (0:00:01.996) 1:13:21.715 *******
2026-02-02 06:47:41.805162 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:47:41.805179 | orchestrator |
2026-02-02 06:47:41.805196 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-02-02 06:47:41.805213 | orchestrator | Monday 02 February 2026 06:46:56 +0000 (0:00:01.920) 1:13:23.636 *******
2026-02-02 06:47:41.805229 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:47:41.805246 | orchestrator |
2026-02-02 06:47:41.805262 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-02-02 06:47:41.805278 | orchestrator | Monday 02 February 2026 06:46:58 +0000 (0:00:02.271) 1:13:25.907 *******
2026-02-02 06:47:41.805296 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-02 06:47:41.805314 | orchestrator |
2026-02-02 06:47:41.805331 | orchestrator | PLAY [Upgrade ceph rbd mirror node] ********************************************
2026-02-02 06:47:41.805348 | orchestrator | skipping: no hosts matched
2026-02-02 06:47:41.805364 | orchestrator |
2026-02-02 06:47:41.805380 | orchestrator | PLAY [Upgrade ceph nfs node] ***************************************************
2026-02-02 06:47:41.805398 | orchestrator | skipping: no hosts matched
2026-02-02 06:47:41.805414 | orchestrator |
2026-02-02 06:47:41.805431 | orchestrator | PLAY [Upgrade ceph client node] ************************************************
2026-02-02 06:47:41.805447 | orchestrator | skipping: no hosts matched
2026-02-02 06:47:41.805463 | orchestrator |
2026-02-02 06:47:41.805479 | orchestrator | PLAY [Upgrade ceph-crash daemons] **********************************************
2026-02-02 06:47:41.805495 | orchestrator |
2026-02-02 06:47:41.805511 | orchestrator | TASK [Stop the ceph-crash service] *********************************************
2026-02-02 06:47:41.805528 | orchestrator | Monday 02 February 2026 06:47:03 +0000 (0:00:05.233) 1:13:31.141 *******
2026-02-02 06:47:41.805543 | orchestrator | changed: [testbed-node-0]
2026-02-02 06:47:41.805560 | orchestrator | changed: [testbed-node-1]
2026-02-02 06:47:41.805577 | orchestrator | changed: [testbed-node-2]
2026-02-02 06:47:41.805594 | orchestrator | changed: [testbed-node-3]
2026-02-02 06:47:41.805610 | orchestrator | changed: [testbed-node-4]
2026-02-02 06:47:41.805625 | orchestrator | changed: [testbed-node-5]
2026-02-02 06:47:41.805642 | orchestrator |
2026-02-02 06:47:41.805658 | orchestrator | TASK [Mask and disable the ceph-crash service] *********************************
2026-02-02 06:47:41.805676 | orchestrator | Monday 02 February 2026 06:47:06 +0000 (0:00:02.564) 1:13:33.706 *******
2026-02-02 06:47:41.805691 | orchestrator | changed: [testbed-node-3]
2026-02-02 06:47:41.805707 | orchestrator | changed: [testbed-node-1]
2026-02-02 06:47:41.805723 | orchestrator | changed: [testbed-node-0]
2026-02-02 06:47:41.805739 | orchestrator | changed: [testbed-node-4]
2026-02-02 06:47:41.805754 | orchestrator | changed: [testbed-node-2]
2026-02-02 06:47:41.805770 | orchestrator | changed: [testbed-node-5]
2026-02-02 06:47:41.805786 | orchestrator |
2026-02-02 06:47:41.805802 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-02 06:47:41.805818 | orchestrator | Monday 02 February 2026 06:47:09 +0000 (0:00:03.342) 1:13:37.048 *******
2026-02-02 06:47:41.805833 | orchestrator | ok: [testbed-node-0]
2026-02-02 06:47:41.805849 | orchestrator | ok: [testbed-node-1]
2026-02-02 06:47:41.805886 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:47:41.805906 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:47:41.805922 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:47:41.805938 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:47:41.805954 | orchestrator |
2026-02-02 06:47:41.805970 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-02 06:47:41.805986 | orchestrator | Monday 02 February 2026 06:47:11 +0000 (0:00:02.355) 1:13:39.404 *******
2026-02-02 06:47:41.806002 | orchestrator | ok: [testbed-node-0]
2026-02-02 06:47:41.806012 | orchestrator | ok: [testbed-node-1]
2026-02-02 06:47:41.806077 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:47:41.806087 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:47:41.806109 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:47:41.806118 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:47:41.806128 | orchestrator |
2026-02-02 06:47:41.806137 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-02 06:47:41.806147 | orchestrator | Monday 02 February 2026 06:47:13 +0000 (0:00:01.937) 1:13:41.341 *******
2026-02-02 06:47:41.806158 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 06:47:41.806169 | orchestrator |
2026-02-02 06:47:41.806179 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-02 06:47:41.806190 | orchestrator | Monday 02 February 2026 06:47:16 +0000 (0:00:02.531) 1:13:43.873 *******
2026-02-02 06:47:41.806208 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-02 06:47:41.806225 | orchestrator |
2026-02-02 06:47:41.806271 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-02 06:47:41.806290 | orchestrator | Monday 02 February 2026 06:47:18 +0000 (0:00:02.126) 1:13:45.999 *******
2026-02-02 06:47:41.806306 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:47:41.806324 | orchestrator | ok: [testbed-node-0]
2026-02-02 06:47:41.806343 | orchestrator | ok: [testbed-node-1]
2026-02-02 06:47:41.806361 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:47:41.806379 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:47:41.806397 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:47:41.806414 | orchestrator |
2026-02-02 06:47:41.806433 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-02 06:47:41.806452 | orchestrator | Monday 02 February 2026 06:47:20 +0000 (0:00:02.329) 1:13:48.329 *******
2026-02-02 06:47:41.806470 | orchestrator | skipping: [testbed-node-0]
2026-02-02 06:47:41.806482 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:47:41.806490 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:47:41.806497 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:47:41.806505 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:47:41.806513 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:47:41.806521 | orchestrator |
2026-02-02 06:47:41.806529 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-02 06:47:41.806537 | orchestrator | Monday 02 February 2026 06:47:22 +0000 (0:00:02.106) 1:13:50.435 *******
2026-02-02 06:47:41.806544 | orchestrator | skipping: [testbed-node-0]
2026-02-02 06:47:41.806552 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:47:41.806560 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:47:41.806568 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:47:41.806576 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:47:41.806584 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:47:41.806591 | orchestrator |
2026-02-02 06:47:41.806599 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-02 06:47:41.806607 | orchestrator | Monday 02 February 2026 06:47:25 +0000 (0:00:02.442) 1:13:52.878 *******
2026-02-02 06:47:41.806615 | orchestrator | skipping: [testbed-node-0]
2026-02-02 06:47:41.806622 | orchestrator | skipping: [testbed-node-1]
2026-02-02 06:47:41.806630 | orchestrator | skipping: [testbed-node-2]
2026-02-02 06:47:41.806638 | orchestrator | ok: [testbed-node-3]
2026-02-02 06:47:41.806650 | orchestrator | ok: [testbed-node-4]
2026-02-02 06:47:41.806662 | orchestrator | ok: [testbed-node-5]
2026-02-02 06:47:41.806676 | orchestrator |
2026-02-02 06:47:41.806688 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-02 06:47:41.806701 | orchestrator | Monday 02 February 2026 06:47:27 +0000 (0:00:02.051) 1:13:54.930 *******
2026-02-02 06:47:41.806715 | orchestrator | skipping: [testbed-node-3]
2026-02-02 06:47:41.806729 | orchestrator | ok: [testbed-node-0]
2026-02-02 06:47:41.806737 | orchestrator | skipping: [testbed-node-4]
2026-02-02 06:47:41.806745 | orchestrator | skipping: [testbed-node-5]
2026-02-02 06:47:41.806761 | orchestrator | ok: [testbed-node-1]
2026-02-02 06:47:41.806769 | orchestrator | ok: [testbed-node-2]
2026-02-02 06:47:41.806777 | orchestrator |
2026-02-02 06:47:41.806785 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-02 06:47:41.806793 | orchestrator | Monday 02 February 2026 06:47:29 +0000 (0:00:02.174) 1:13:57.104 ******* 2026-02-02 06:47:41.806800 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:47:41.806808 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:47:41.806816 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:47:41.806824 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:47:41.806831 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:47:41.806839 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:47:41.806847 | orchestrator | 2026-02-02 06:47:41.806855 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-02 06:47:41.806863 | orchestrator | Monday 02 February 2026 06:47:31 +0000 (0:00:01.799) 1:13:58.904 ******* 2026-02-02 06:47:41.806925 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:47:41.806939 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:47:41.806951 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:47:41.806964 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:47:41.806977 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:47:41.806991 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:47:41.807005 | orchestrator | 2026-02-02 06:47:41.807018 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-02 06:47:41.807031 | orchestrator | Monday 02 February 2026 06:47:33 +0000 (0:00:01.742) 1:14:00.646 ******* 2026-02-02 06:47:41.807043 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:47:41.807051 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:47:41.807059 | orchestrator | ok: [testbed-node-2] 2026-02-02 06:47:41.807067 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:47:41.807075 | orchestrator | ok: [testbed-node-4] 
2026-02-02 06:47:41.807083 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:47:41.807091 | orchestrator | 2026-02-02 06:47:41.807099 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-02 06:47:41.807107 | orchestrator | Monday 02 February 2026 06:47:35 +0000 (0:00:02.536) 1:14:03.183 ******* 2026-02-02 06:47:41.807115 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:47:41.807122 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:47:41.807130 | orchestrator | ok: [testbed-node-2] 2026-02-02 06:47:41.807138 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:47:41.807146 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:47:41.807153 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:47:41.807161 | orchestrator | 2026-02-02 06:47:41.807169 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-02 06:47:41.807177 | orchestrator | Monday 02 February 2026 06:47:37 +0000 (0:00:02.233) 1:14:05.416 ******* 2026-02-02 06:47:41.807185 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:47:41.807193 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:47:41.807201 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:47:41.807209 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:47:41.807217 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:47:41.807224 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:47:41.807232 | orchestrator | 2026-02-02 06:47:41.807240 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-02 06:47:41.807248 | orchestrator | Monday 02 February 2026 06:47:39 +0000 (0:00:02.129) 1:14:07.546 ******* 2026-02-02 06:47:41.807256 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:47:41.807263 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:47:41.807271 | orchestrator | ok: [testbed-node-2] 2026-02-02 06:47:41.807279 | orchestrator | skipping: 
[testbed-node-3] 2026-02-02 06:47:41.807287 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:47:41.807295 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:47:41.807302 | orchestrator | 2026-02-02 06:47:41.807320 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-02 06:48:36.807568 | orchestrator | Monday 02 February 2026 06:47:41 +0000 (0:00:01.826) 1:14:09.372 ******* 2026-02-02 06:48:36.807687 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:48:36.807704 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:48:36.807717 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:48:36.807728 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:48:36.807740 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:48:36.807751 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:48:36.807762 | orchestrator | 2026-02-02 06:48:36.807773 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-02 06:48:36.807785 | orchestrator | Monday 02 February 2026 06:47:43 +0000 (0:00:02.132) 1:14:11.505 ******* 2026-02-02 06:48:36.807795 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:48:36.807806 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:48:36.807817 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:48:36.807828 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:48:36.807838 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:48:36.807849 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:48:36.807861 | orchestrator | 2026-02-02 06:48:36.807926 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-02 06:48:36.808043 | orchestrator | Monday 02 February 2026 06:47:45 +0000 (0:00:01.993) 1:14:13.499 ******* 2026-02-02 06:48:36.808068 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:48:36.808089 | orchestrator | skipping: [testbed-node-1] 2026-02-02 
06:48:36.808166 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:48:36.808190 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:48:36.808210 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:48:36.808229 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:48:36.808247 | orchestrator | 2026-02-02 06:48:36.808282 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-02 06:48:36.808302 | orchestrator | Monday 02 February 2026 06:47:47 +0000 (0:00:02.072) 1:14:15.571 ******* 2026-02-02 06:48:36.808444 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:48:36.808466 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:48:36.808542 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:48:36.808563 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:48:36.808582 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:48:36.808600 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:48:36.808620 | orchestrator | 2026-02-02 06:48:36.808639 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-02 06:48:36.808660 | orchestrator | Monday 02 February 2026 06:47:49 +0000 (0:00:01.863) 1:14:17.435 ******* 2026-02-02 06:48:36.808723 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:48:36.808746 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:48:36.808767 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:48:36.808787 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:48:36.808805 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:48:36.808824 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:48:36.808842 | orchestrator | 2026-02-02 06:48:36.808861 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-02 06:48:36.808873 | orchestrator | Monday 02 February 2026 06:47:51 +0000 (0:00:01.821) 1:14:19.257 ******* 2026-02-02 06:48:36.808914 | 
orchestrator | ok: [testbed-node-0] 2026-02-02 06:48:36.808935 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:48:36.808956 | orchestrator | ok: [testbed-node-2] 2026-02-02 06:48:36.808994 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:48:36.809015 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:48:36.809034 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:48:36.809054 | orchestrator | 2026-02-02 06:48:36.809073 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-02 06:48:36.809094 | orchestrator | Monday 02 February 2026 06:47:53 +0000 (0:00:01.775) 1:14:21.032 ******* 2026-02-02 06:48:36.809113 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:48:36.809134 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:48:36.809343 | orchestrator | ok: [testbed-node-2] 2026-02-02 06:48:36.809360 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:48:36.809379 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:48:36.809396 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:48:36.809415 | orchestrator | 2026-02-02 06:48:36.809435 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-02 06:48:36.809453 | orchestrator | Monday 02 February 2026 06:47:55 +0000 (0:00:01.799) 1:14:22.832 ******* 2026-02-02 06:48:36.809472 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:48:36.809493 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:48:36.809512 | orchestrator | ok: [testbed-node-2] 2026-02-02 06:48:36.809531 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:48:36.809543 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:48:36.809554 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:48:36.809564 | orchestrator | 2026-02-02 06:48:36.809583 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-02-02 06:48:36.809601 | orchestrator | Monday 02 February 2026 06:47:57 +0000 (0:00:02.241) 
1:14:25.073 ******* 2026-02-02 06:48:36.809616 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:48:36.809635 | orchestrator | 2026-02-02 06:48:36.809654 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-02-02 06:48:36.809672 | orchestrator | Monday 02 February 2026 06:48:00 +0000 (0:00:03.062) 1:14:28.135 ******* 2026-02-02 06:48:36.809690 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:48:36.809709 | orchestrator | 2026-02-02 06:48:36.809729 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-02-02 06:48:36.809747 | orchestrator | Monday 02 February 2026 06:48:03 +0000 (0:00:03.044) 1:14:31.180 ******* 2026-02-02 06:48:36.809766 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:48:36.809787 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:48:36.809806 | orchestrator | ok: [testbed-node-2] 2026-02-02 06:48:36.809825 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:48:36.809844 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:48:36.809862 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:48:36.809918 | orchestrator | 2026-02-02 06:48:36.809940 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-02-02 06:48:36.809959 | orchestrator | Monday 02 February 2026 06:48:06 +0000 (0:00:02.937) 1:14:34.118 ******* 2026-02-02 06:48:36.809970 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:48:36.809981 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:48:36.809992 | orchestrator | ok: [testbed-node-2] 2026-02-02 06:48:36.810002 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:48:36.810013 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:48:36.810088 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:48:36.810100 | orchestrator | 2026-02-02 06:48:36.810111 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-02-02 06:48:36.810146 | orchestrator 
| Monday 02 February 2026 06:48:08 +0000 (0:00:02.092) 1:14:36.210 ******* 2026-02-02 06:48:36.810159 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-02 06:48:36.810171 | orchestrator | 2026-02-02 06:48:36.810182 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-02-02 06:48:36.810193 | orchestrator | Monday 02 February 2026 06:48:11 +0000 (0:00:02.631) 1:14:38.842 ******* 2026-02-02 06:48:36.810203 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:48:36.810214 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:48:36.810225 | orchestrator | ok: [testbed-node-2] 2026-02-02 06:48:36.810235 | orchestrator | ok: [testbed-node-3] 2026-02-02 06:48:36.810246 | orchestrator | ok: [testbed-node-4] 2026-02-02 06:48:36.810256 | orchestrator | ok: [testbed-node-5] 2026-02-02 06:48:36.810267 | orchestrator | 2026-02-02 06:48:36.810277 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-02-02 06:48:36.810288 | orchestrator | Monday 02 February 2026 06:48:14 +0000 (0:00:02.836) 1:14:41.678 ******* 2026-02-02 06:48:36.810299 | orchestrator | changed: [testbed-node-3] 2026-02-02 06:48:36.810323 | orchestrator | changed: [testbed-node-0] 2026-02-02 06:48:36.810334 | orchestrator | changed: [testbed-node-1] 2026-02-02 06:48:36.810344 | orchestrator | changed: [testbed-node-4] 2026-02-02 06:48:36.810355 | orchestrator | changed: [testbed-node-2] 2026-02-02 06:48:36.810428 | orchestrator | changed: [testbed-node-5] 2026-02-02 06:48:36.810448 | orchestrator | 2026-02-02 06:48:36.810469 | orchestrator | PLAY [Complete upgrade] ******************************************************** 2026-02-02 06:48:36.810483 | orchestrator | 2026-02-02 06:48:36.810494 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 
2026-02-02 06:48:36.810507 | orchestrator | Monday 02 February 2026 06:48:18 +0000 (0:00:04.579) 1:14:46.258 ******* 2026-02-02 06:48:36.810526 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:48:36.810545 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:48:36.810563 | orchestrator | ok: [testbed-node-2] 2026-02-02 06:48:36.810583 | orchestrator | 2026-02-02 06:48:36.810601 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-02 06:48:36.810620 | orchestrator | Monday 02 February 2026 06:48:20 +0000 (0:00:01.651) 1:14:47.909 ******* 2026-02-02 06:48:36.810639 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:48:36.810655 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:48:36.810673 | orchestrator | ok: [testbed-node-2] 2026-02-02 06:48:36.810692 | orchestrator | 2026-02-02 06:48:36.810710 | orchestrator | TASK [Container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-02-02 06:48:36.810731 | orchestrator | Monday 02 February 2026 06:48:21 +0000 (0:00:01.587) 1:14:49.497 ******* 2026-02-02 06:48:36.810750 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:48:36.810769 | orchestrator | 2026-02-02 06:48:36.810788 | orchestrator | TASK [Non container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-02-02 06:48:36.810808 | orchestrator | Monday 02 February 2026 06:48:24 +0000 (0:00:02.353) 1:14:51.851 ******* 2026-02-02 06:48:36.810826 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:48:36.810845 | orchestrator | 2026-02-02 06:48:36.810865 | orchestrator | PLAY [Upgrade node-exporter] *************************************************** 2026-02-02 06:48:36.810878 | orchestrator | 2026-02-02 06:48:36.811037 | orchestrator | TASK [Stop node-exporter] ****************************************************** 2026-02-02 06:48:36.811062 | orchestrator | Monday 02 February 2026 06:48:26 +0000 (0:00:01.926) 1:14:53.778 ******* 2026-02-02 
06:48:36.811081 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:48:36.811100 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:48:36.811201 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:48:36.811230 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:48:36.811249 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:48:36.811268 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:48:36.811289 | orchestrator | skipping: [testbed-manager] 2026-02-02 06:48:36.811308 | orchestrator | 2026-02-02 06:48:36.811328 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-02 06:48:36.811342 | orchestrator | Monday 02 February 2026 06:48:28 +0000 (0:00:02.244) 1:14:56.022 ******* 2026-02-02 06:48:36.811399 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:48:36.811420 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:48:36.811439 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:48:36.811458 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:48:36.811477 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:48:36.811497 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:48:36.811516 | orchestrator | skipping: [testbed-manager] 2026-02-02 06:48:36.811531 | orchestrator | 2026-02-02 06:48:36.811547 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-02-02 06:48:36.811567 | orchestrator | Monday 02 February 2026 06:48:30 +0000 (0:00:02.429) 1:14:58.451 ******* 2026-02-02 06:48:36.811586 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:48:36.811604 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:48:36.811623 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:48:36.811634 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:48:36.811651 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:48:36.811687 | orchestrator | skipping: [testbed-node-5] 2026-02-02 
06:48:36.811705 | orchestrator | skipping: [testbed-manager] 2026-02-02 06:48:36.811725 | orchestrator | 2026-02-02 06:48:36.811745 | orchestrator | TASK [ceph-container-common : Container registry authentication] *************** 2026-02-02 06:48:36.811764 | orchestrator | Monday 02 February 2026 06:48:33 +0000 (0:00:02.482) 1:15:00.934 ******* 2026-02-02 06:48:36.811784 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:48:36.811797 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:48:36.811808 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:48:36.811818 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:48:36.811829 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:48:36.811840 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:48:36.811850 | orchestrator | skipping: [testbed-manager] 2026-02-02 06:48:36.811861 | orchestrator | 2026-02-02 06:48:36.811872 | orchestrator | TASK [ceph-node-exporter : Include setup_container.yml] ************************ 2026-02-02 06:48:36.811927 | orchestrator | Monday 02 February 2026 06:48:35 +0000 (0:00:02.518) 1:15:03.453 ******* 2026-02-02 06:48:36.811939 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:48:36.811949 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:48:36.811960 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:48:36.811997 | orchestrator | skipping: [testbed-node-3] 2026-02-02 06:49:25.573545 | orchestrator | skipping: [testbed-node-4] 2026-02-02 06:49:25.573660 | orchestrator | skipping: [testbed-node-5] 2026-02-02 06:49:25.573676 | orchestrator | skipping: [testbed-manager] 2026-02-02 06:49:25.573688 | orchestrator | 2026-02-02 06:49:25.573701 | orchestrator | PLAY [Upgrade monitoring node] ************************************************* 2026-02-02 06:49:25.573713 | orchestrator | 2026-02-02 06:49:25.573724 | orchestrator | TASK [Stop monitoring services] ************************************************ 2026-02-02 06:49:25.573735 | 
orchestrator | Monday 02 February 2026 06:48:38 +0000 (0:00:03.034) 1:15:06.488 ******* 2026-02-02 06:49:25.573746 | orchestrator | skipping: [testbed-manager] => (item=alertmanager)  2026-02-02 06:49:25.573758 | orchestrator | skipping: [testbed-manager] => (item=prometheus)  2026-02-02 06:49:25.573769 | orchestrator | skipping: [testbed-manager] => (item=grafana-server)  2026-02-02 06:49:25.573780 | orchestrator | skipping: [testbed-manager] 2026-02-02 06:49:25.573792 | orchestrator | 2026-02-02 06:49:25.573803 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************ 2026-02-02 06:49:25.573814 | orchestrator | Monday 02 February 2026 06:48:40 +0000 (0:00:01.283) 1:15:07.772 ******* 2026-02-02 06:49:25.573825 | orchestrator | skipping: [testbed-manager] 2026-02-02 06:49:25.573835 | orchestrator | 2026-02-02 06:49:25.573846 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************ 2026-02-02 06:49:25.573857 | orchestrator | Monday 02 February 2026 06:48:41 +0000 (0:00:01.101) 1:15:08.873 ******* 2026-02-02 06:49:25.573868 | orchestrator | skipping: [testbed-manager] 2026-02-02 06:49:25.573879 | orchestrator | 2026-02-02 06:49:25.573889 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] *********************** 2026-02-02 06:49:25.574005 | orchestrator | Monday 02 February 2026 06:48:42 +0000 (0:00:01.130) 1:15:10.003 ******* 2026-02-02 06:49:25.574078 | orchestrator | skipping: [testbed-manager] 2026-02-02 06:49:25.574092 | orchestrator | 2026-02-02 06:49:25.574106 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] *********************** 2026-02-02 06:49:25.574119 | orchestrator | Monday 02 February 2026 06:48:43 +0000 (0:00:01.130) 1:15:11.134 ******* 2026-02-02 06:49:25.574132 | orchestrator | skipping: [testbed-manager] 2026-02-02 06:49:25.574144 | orchestrator | 2026-02-02 06:49:25.574157 | orchestrator | TASK [ceph-prometheus : Create 
prometheus directories] ************************* 2026-02-02 06:49:25.574169 | orchestrator | Monday 02 February 2026 06:48:44 +0000 (0:00:01.121) 1:15:12.255 ******* 2026-02-02 06:49:25.574182 | orchestrator | skipping: [testbed-manager] => (item=/etc/prometheus)  2026-02-02 06:49:25.574195 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/prometheus)  2026-02-02 06:49:25.574208 | orchestrator | skipping: [testbed-manager] 2026-02-02 06:49:25.574244 | orchestrator | 2026-02-02 06:49:25.574258 | orchestrator | TASK [ceph-prometheus : Write prometheus config file] ************************** 2026-02-02 06:49:25.574271 | orchestrator | Monday 02 February 2026 06:48:45 +0000 (0:00:01.169) 1:15:13.426 ******* 2026-02-02 06:49:25.574283 | orchestrator | skipping: [testbed-manager] 2026-02-02 06:49:25.574296 | orchestrator | 2026-02-02 06:49:25.574308 | orchestrator | TASK [ceph-prometheus : Make sure the alerting rules directory exists] ********* 2026-02-02 06:49:25.574321 | orchestrator | Monday 02 February 2026 06:48:46 +0000 (0:00:01.113) 1:15:14.539 ******* 2026-02-02 06:49:25.574333 | orchestrator | skipping: [testbed-manager] 2026-02-02 06:49:25.574346 | orchestrator | 2026-02-02 06:49:25.574359 | orchestrator | TASK [ceph-prometheus : Copy alerting rules] *********************************** 2026-02-02 06:49:25.574372 | orchestrator | Monday 02 February 2026 06:48:48 +0000 (0:00:01.107) 1:15:15.647 ******* 2026-02-02 06:49:25.574385 | orchestrator | skipping: [testbed-manager] 2026-02-02 06:49:25.574397 | orchestrator | 2026-02-02 06:49:25.574410 | orchestrator | TASK [ceph-prometheus : Create alertmanager directories] *********************** 2026-02-02 06:49:25.574423 | orchestrator | Monday 02 February 2026 06:48:49 +0000 (0:00:01.169) 1:15:16.816 ******* 2026-02-02 06:49:25.574435 | orchestrator | skipping: [testbed-manager] => (item=/etc/alertmanager)  2026-02-02 06:49:25.574446 | orchestrator | skipping: [testbed-manager] => 
(item=/var/lib/alertmanager)  2026-02-02 06:49:25.574456 | orchestrator | skipping: [testbed-manager] 2026-02-02 06:49:25.574467 | orchestrator | 2026-02-02 06:49:25.574478 | orchestrator | TASK [ceph-prometheus : Write alertmanager config file] ************************ 2026-02-02 06:49:25.574488 | orchestrator | Monday 02 February 2026 06:48:50 +0000 (0:00:01.320) 1:15:18.136 ******* 2026-02-02 06:49:25.574499 | orchestrator | skipping: [testbed-manager] 2026-02-02 06:49:25.574509 | orchestrator | 2026-02-02 06:49:25.574520 | orchestrator | TASK [ceph-prometheus : Include setup_container.yml] *************************** 2026-02-02 06:49:25.574531 | orchestrator | Monday 02 February 2026 06:48:51 +0000 (0:00:01.190) 1:15:19.326 ******* 2026-02-02 06:49:25.574541 | orchestrator | skipping: [testbed-manager] 2026-02-02 06:49:25.574552 | orchestrator | 2026-02-02 06:49:25.574563 | orchestrator | TASK [ceph-grafana : Include setup_container.yml] ****************************** 2026-02-02 06:49:25.574573 | orchestrator | Monday 02 February 2026 06:48:52 +0000 (0:00:01.096) 1:15:20.423 ******* 2026-02-02 06:49:25.574584 | orchestrator | skipping: [testbed-manager] 2026-02-02 06:49:25.574594 | orchestrator | 2026-02-02 06:49:25.574605 | orchestrator | TASK [ceph-grafana : Include configure_grafana.yml] **************************** 2026-02-02 06:49:25.574616 | orchestrator | Monday 02 February 2026 06:48:53 +0000 (0:00:01.121) 1:15:21.544 ******* 2026-02-02 06:49:25.574626 | orchestrator | skipping: [testbed-manager] 2026-02-02 06:49:25.574637 | orchestrator | 2026-02-02 06:49:25.574648 | orchestrator | PLAY [Upgrade ceph dashboard] ************************************************** 2026-02-02 06:49:25.574659 | orchestrator | 2026-02-02 06:49:25.574669 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-02 06:49:25.574680 | orchestrator | Monday 02 February 2026 06:48:55 +0000 (0:00:01.617) 1:15:23.161 ******* 2026-02-02 
06:49:25.574691 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:49:25.574701 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:49:25.574712 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:49:25.574723 | orchestrator | 2026-02-02 06:49:25.574733 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************ 2026-02-02 06:49:25.574744 | orchestrator | Monday 02 February 2026 06:48:57 +0000 (0:00:01.679) 1:15:24.841 ******* 2026-02-02 06:49:25.574755 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:49:25.574766 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:49:25.574794 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:49:25.574805 | orchestrator | 2026-02-02 06:49:25.574816 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************ 2026-02-02 06:49:25.574827 | orchestrator | Monday 02 February 2026 06:48:58 +0000 (0:00:01.342) 1:15:26.183 ******* 2026-02-02 06:49:25.574838 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:49:25.574857 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:49:25.574868 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:49:25.574878 | orchestrator | 2026-02-02 06:49:25.574889 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] *********************** 2026-02-02 06:49:25.574928 | orchestrator | Monday 02 February 2026 06:48:59 +0000 (0:00:01.342) 1:15:27.525 ******* 2026-02-02 06:49:25.574940 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:49:25.574951 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:49:25.574961 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:49:25.574972 | orchestrator | 2026-02-02 06:49:25.574983 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] *********************** 2026-02-02 06:49:25.574994 | orchestrator | Monday 02 February 2026 06:49:01 +0000 (0:00:01.508) 1:15:29.034 ******* 2026-02-02 
06:49:25.575004 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:49:25.575015 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:49:25.575026 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:49:25.575037 | orchestrator | 2026-02-02 06:49:25.575048 | orchestrator | TASK [ceph-dashboard : Include configure_dashboard.yml] ************************ 2026-02-02 06:49:25.575058 | orchestrator | Monday 02 February 2026 06:49:02 +0000 (0:00:01.374) 1:15:30.409 ******* 2026-02-02 06:49:25.575069 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:49:25.575080 | orchestrator | skipping: [testbed-node-1] 2026-02-02 06:49:25.575090 | orchestrator | skipping: [testbed-node-2] 2026-02-02 06:49:25.575101 | orchestrator | 2026-02-02 06:49:25.575112 | orchestrator | TASK [ceph-dashboard : Print dashboard URL] ************************************ 2026-02-02 06:49:25.575123 | orchestrator | Monday 02 February 2026 06:49:04 +0000 (0:00:01.339) 1:15:31.748 ******* 2026-02-02 06:49:25.575133 | orchestrator | skipping: [testbed-node-0] 2026-02-02 06:49:25.575144 | orchestrator | 2026-02-02 06:49:25.575155 | orchestrator | PLAY [Switch any existing crush buckets to straw2] ***************************** 2026-02-02 06:49:25.575165 | orchestrator | 2026-02-02 06:49:25.575176 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-02 06:49:25.575187 | orchestrator | Monday 02 February 2026 06:49:06 +0000 (0:00:01.911) 1:15:33.660 ******* 2026-02-02 06:49:25.575198 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:49:25.575208 | orchestrator | 2026-02-02 06:49:25.575219 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-02 06:49:25.575230 | orchestrator | Monday 02 February 2026 06:49:07 +0000 (0:00:01.429) 1:15:35.090 ******* 2026-02-02 06:49:25.575240 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:49:25.575251 | orchestrator | 2026-02-02 06:49:25.575262 
| orchestrator | TASK [Set_fact ceph_cmd] ******************************************************* 2026-02-02 06:49:25.575273 | orchestrator | Monday 02 February 2026 06:49:08 +0000 (0:00:01.180) 1:15:36.271 ******* 2026-02-02 06:49:25.575284 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:49:25.575294 | orchestrator | 2026-02-02 06:49:25.575305 | orchestrator | TASK [Backup the crushmap] ***************************************************** 2026-02-02 06:49:25.575316 | orchestrator | Monday 02 February 2026 06:49:09 +0000 (0:00:01.155) 1:15:37.426 ******* 2026-02-02 06:49:25.575327 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:49:25.575337 | orchestrator | 2026-02-02 06:49:25.575348 | orchestrator | TASK [Switch crush buckets to straw2] ****************************************** 2026-02-02 06:49:25.575359 | orchestrator | Monday 02 February 2026 06:49:12 +0000 (0:00:03.021) 1:15:40.447 ******* 2026-02-02 06:49:25.575370 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:49:25.575381 | orchestrator | 2026-02-02 06:49:25.575391 | orchestrator | TASK [Remove crushmap backup] ************************************************** 2026-02-02 06:49:25.575402 | orchestrator | Monday 02 February 2026 06:49:16 +0000 (0:00:03.378) 1:15:43.826 ******* 2026-02-02 06:49:25.575413 | orchestrator | changed: [testbed-node-0] 2026-02-02 06:49:25.575424 | orchestrator | 2026-02-02 06:49:25.575434 | orchestrator | PLAY [Show ceph status] ******************************************************** 2026-02-02 06:49:25.575445 | orchestrator | 2026-02-02 06:49:25.575456 | orchestrator | TASK [Set_fact container_exec_cmd_status] ************************************** 2026-02-02 06:49:25.575492 | orchestrator | Monday 02 February 2026 06:49:18 +0000 (0:00:01.885) 1:15:45.711 ******* 2026-02-02 06:49:25.575513 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:49:25.575525 | orchestrator | ok: [testbed-node-1] 2026-02-02 06:49:25.575536 | orchestrator | ok: [testbed-node-2] 2026-02-02 
06:49:25.575547 | orchestrator | 2026-02-02 06:49:25.575558 | orchestrator | TASK [Show ceph status] ******************************************************** 2026-02-02 06:49:25.575568 | orchestrator | Monday 02 February 2026 06:49:19 +0000 (0:00:01.759) 1:15:47.471 ******* 2026-02-02 06:49:25.575579 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:49:25.575590 | orchestrator | 2026-02-02 06:49:25.575601 | orchestrator | TASK [Show all daemons version] ************************************************ 2026-02-02 06:49:25.575612 | orchestrator | Monday 02 February 2026 06:49:22 +0000 (0:00:02.262) 1:15:49.733 ******* 2026-02-02 06:49:25.575622 | orchestrator | ok: [testbed-node-0] 2026-02-02 06:49:25.575633 | orchestrator | 2026-02-02 06:49:25.575644 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 06:49:25.575656 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-02 06:49:25.575669 | orchestrator | testbed-manager : ok=25  changed=1  unreachable=0 failed=0 skipped=76  rescued=0 ignored=0 2026-02-02 06:49:25.575681 | orchestrator | testbed-node-0 : ok=248  changed=20  unreachable=0 failed=0 skipped=376  rescued=0 ignored=0 2026-02-02 06:49:25.575692 | orchestrator | testbed-node-1 : ok=191  changed=15  unreachable=0 failed=0 skipped=350  rescued=0 ignored=0 2026-02-02 06:49:25.575710 | orchestrator | testbed-node-2 : ok=196  changed=16  unreachable=0 failed=0 skipped=351  rescued=0 ignored=0 2026-02-02 06:49:26.339010 | orchestrator | testbed-node-3 : ok=316  changed=22  unreachable=0 failed=0 skipped=362  rescued=0 ignored=0 2026-02-02 06:49:26.339097 | orchestrator | testbed-node-4 : ok=302  changed=18  unreachable=0 failed=0 skipped=345  rescued=0 ignored=0 2026-02-02 06:49:26.339108 | orchestrator | testbed-node-5 : ok=309  changed=17  unreachable=0 failed=0 skipped=358  rescued=0 ignored=0 2026-02-02 06:49:26.339117 | orchestrator | 2026-02-02 
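The PLAY RECAP above has a regular `host : key=value …` shape, which is what wrapper scripts typically match on to fail fast when `failed` or `unreachable` is non-zero. A minimal parsing sketch (the helper name and the idea of parsing the recap are assumptions for illustration, not part of the job itself):

```python
import re

# Hypothetical helper: parse one ansible-playbook PLAY RECAP line, e.g.
# "testbed-node-0 : ok=248  changed=20  unreachable=0 failed=0 ..."
# into (host, counters) so a caller can check failed/unreachable counts.
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")

def parse_recap_line(line: str):
    m = RECAP_RE.match(line.strip())
    if not m:
        return None
    counters = {
        k: int(v)
        for k, v in (pair.split("=") for pair in m.group("counters").split())
    }
    return m.group("host"), counters

host, counters = parse_recap_line(
    "testbed-node-0 : ok=248  changed=20  unreachable=0 failed=0 skipped=376  rescued=0 ignored=0"
)
assert host == "testbed-node-0"
assert counters["failed"] == 0 and counters["unreachable"] == 0
```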
06:49:26.339126 | orchestrator | 2026-02-02 06:49:26.339134 | orchestrator | 2026-02-02 06:49:26.339142 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 06:49:26.339151 | orchestrator | Monday 02 February 2026 06:49:25 +0000 (0:00:03.388) 1:15:53.121 ******* 2026-02-02 06:49:26.339159 | orchestrator | =============================================================================== 2026-02-02 06:49:26.339167 | orchestrator | Disable pg autoscale on pools ------------------------------------------ 76.42s 2026-02-02 06:49:26.339175 | orchestrator | Re-enable pg autoscale on pools ---------------------------------------- 74.33s 2026-02-02 06:49:26.339183 | orchestrator | Gather and delegate facts ---------------------------------------------- 33.17s 2026-02-02 06:49:26.339191 | orchestrator | Waiting for clean pgs... ----------------------------------------------- 32.80s 2026-02-02 06:49:26.339198 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.59s 2026-02-02 06:49:26.339206 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.19s 2026-02-02 06:49:26.339214 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 29.71s 2026-02-02 06:49:26.339222 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 29.04s 2026-02-02 06:49:26.339230 | orchestrator | Stop ceph mgr ---------------------------------------------------------- 27.86s 2026-02-02 06:49:26.339238 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.97s 2026-02-02 06:49:26.339268 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 22.86s 2026-02-02 06:49:26.339276 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 21.91s 2026-02-02 06:49:26.339285 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 16.51s 2026-02-02 06:49:26.339293 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 15.52s 2026-02-02 06:49:26.339301 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 14.24s 2026-02-02 06:49:26.339309 | orchestrator | Stop ceph osd ---------------------------------------------------------- 12.77s 2026-02-02 06:49:26.339323 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 12.47s 2026-02-02 06:49:26.339337 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 12.45s 2026-02-02 06:49:26.339351 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 11.41s 2026-02-02 06:49:26.339365 | orchestrator | Stop ceph mon ---------------------------------------------------------- 10.97s 2026-02-02 06:49:26.710259 | orchestrator | + osism apply cephclient 2026-02-02 06:49:28.808775 | orchestrator | 2026-02-02 06:49:28 | INFO  | Task af047d6f-ba8b-40e6-a9ec-554ab8e1626b (cephclient) was prepared for execution. 2026-02-02 06:49:28.808868 | orchestrator | 2026-02-02 06:49:28 | INFO  | It takes a moment until task af047d6f-ba8b-40e6-a9ec-554ab8e1626b (cephclient) has been started and output is visible here. 
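The `osism apply cephclient` output that follows begins with two callback-plugin warnings; the first, "Expecting value: line 2 column 1 (char 1)", is the standard message Python's `json` module raises when decoding starts on something that is not a JSON value. A minimal reproduction of that message (an assumption about the underlying cause — the log itself does not show what the plugin tried to decode):

```python
import json

# "Expecting value: line 2 column 1 (char 1)" is json's error when the
# first non-whitespace character (here at char index 1, i.e. line 2,
# column 1) is not the start of a JSON value -- e.g. a blank first line
# followed by plain text. Illustrative only; the plugin's actual input
# is not visible in this log.
try:
    json.loads("\nnot json")
    message = None
except json.JSONDecodeError as exc:
    message = str(exc)

assert message.startswith("Expecting value: line 2 column 1 (char 1)")
```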
2026-02-02 06:49:47.772184 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-02 06:49:47.772331 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-02 06:49:47.772375 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-02 06:49:47.772395 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-02 06:49:47.772431 | orchestrator | 2026-02-02 06:49:47.772451 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-02-02 06:49:47.772471 | orchestrator | 2026-02-02 06:49:47.772491 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-02-02 06:49:47.772512 | orchestrator | Monday 02 February 2026 06:49:35 +0000 (0:00:01.791) 0:00:01.791 ******* 2026-02-02 06:49:47.772532 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-02-02 06:49:47.772553 | orchestrator | 2026-02-02 06:49:47.772573 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-02-02 06:49:47.772593 | orchestrator | Monday 02 February 2026 06:49:36 +0000 (0:00:00.852) 0:00:02.644 ******* 2026-02-02 06:49:47.772613 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-02-02 06:49:47.772633 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/data) 2026-02-02 06:49:47.772654 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-02-02 06:49:47.772674 | orchestrator | 2026-02-02 06:49:47.772695 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-02-02 06:49:47.772717 | orchestrator | Monday 02 February 2026 06:49:37 +0000 (0:00:01.756) 0:00:04.401 ******* 2026-02-02 06:49:47.772738 | orchestrator | ok: [testbed-manager] => 
(item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-02-02 06:49:47.772758 | orchestrator | 2026-02-02 06:49:47.772778 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-02-02 06:49:47.772797 | orchestrator | Monday 02 February 2026 06:49:38 +0000 (0:00:01.064) 0:00:05.465 ******* 2026-02-02 06:49:47.772817 | orchestrator | ok: [testbed-manager] 2026-02-02 06:49:47.772837 | orchestrator | 2026-02-02 06:49:47.772858 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-02-02 06:49:47.772951 | orchestrator | Monday 02 February 2026 06:49:39 +0000 (0:00:00.904) 0:00:06.370 ******* 2026-02-02 06:49:47.772974 | orchestrator | ok: [testbed-manager] 2026-02-02 06:49:47.772995 | orchestrator | 2026-02-02 06:49:47.773016 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-02-02 06:49:47.773035 | orchestrator | Monday 02 February 2026 06:49:40 +0000 (0:00:00.928) 0:00:07.298 ******* 2026-02-02 06:49:47.773055 | orchestrator | ok: [testbed-manager] 2026-02-02 06:49:47.773075 | orchestrator | 2026-02-02 06:49:47.773095 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-02-02 06:49:47.773116 | orchestrator | Monday 02 February 2026 06:49:41 +0000 (0:00:01.107) 0:00:08.406 ******* 2026-02-02 06:49:47.773135 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-02-02 06:49:47.773151 | orchestrator | ok: [testbed-manager] => (item=ceph-authtool) 2026-02-02 06:49:47.773162 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-02-02 06:49:47.773173 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-02-02 06:49:47.773184 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-02-02 06:49:47.773195 | orchestrator | 2026-02-02 06:49:47.773205 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] 
****************** 2026-02-02 06:49:47.773216 | orchestrator | Monday 02 February 2026 06:49:45 +0000 (0:00:03.885) 0:00:12.291 ******* 2026-02-02 06:49:47.773227 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-02-02 06:49:47.773238 | orchestrator | 2026-02-02 06:49:47.773248 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-02-02 06:49:47.773259 | orchestrator | Monday 02 February 2026 06:49:46 +0000 (0:00:00.504) 0:00:12.796 ******* 2026-02-02 06:49:47.773270 | orchestrator | skipping: [testbed-manager] 2026-02-02 06:49:47.773281 | orchestrator | 2026-02-02 06:49:47.773291 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-02-02 06:49:47.773302 | orchestrator | Monday 02 February 2026 06:49:46 +0000 (0:00:00.152) 0:00:12.948 ******* 2026-02-02 06:49:47.773312 | orchestrator | skipping: [testbed-manager] 2026-02-02 06:49:47.773323 | orchestrator | 2026-02-02 06:49:47.773333 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-02 06:49:47.773344 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-02 06:49:47.773356 | orchestrator | 2026-02-02 06:49:47.773367 | orchestrator | 2026-02-02 06:49:47.773378 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-02 06:49:47.773388 | orchestrator | Monday 02 February 2026 06:49:47 +0000 (0:00:01.128) 0:00:14.076 ******* 2026-02-02 06:49:47.773399 | orchestrator | =============================================================================== 2026-02-02 06:49:47.773409 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.89s 2026-02-02 06:49:47.773420 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.76s 2026-02-02 06:49:47.773431 | orchestrator | 
osism.services.cephclient : Include rook task --------------------------- 1.13s 2026-02-02 06:49:47.773441 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------- 1.11s 2026-02-02 06:49:47.773452 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.06s 2026-02-02 06:49:47.773462 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.93s 2026-02-02 06:49:47.773498 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.90s 2026-02-02 06:49:47.773509 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.85s 2026-02-02 06:49:47.773519 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.50s 2026-02-02 06:49:47.773530 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s 2026-02-02 06:49:48.088599 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-02-02 06:49:48.088726 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/300-openstack.sh 2026-02-02 06:49:48.096716 | orchestrator | + set -e 2026-02-02 06:49:48.096801 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-02 06:49:48.096822 | orchestrator | ++ export INTERACTIVE=false 2026-02-02 06:49:48.096841 | orchestrator | ++ INTERACTIVE=false 2026-02-02 06:49:48.096859 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-02 06:49:48.096875 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-02 06:49:48.096890 | orchestrator | + source /opt/manager-vars.sh 2026-02-02 06:49:48.096934 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-02 06:49:48.096950 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-02 06:49:48.096966 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-02 06:49:48.096982 | orchestrator | ++ CEPH_VERSION=reef 2026-02-02 06:49:48.096998 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-02 
06:49:48.097013 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-02 06:49:48.097028 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-02 06:49:48.097045 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-02 06:49:48.097060 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-02 06:49:48.097076 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-02 06:49:48.097093 | orchestrator | ++ export ARA=false 2026-02-02 06:49:48.097109 | orchestrator | ++ ARA=false 2026-02-02 06:49:48.097124 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-02 06:49:48.097141 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-02 06:49:48.097156 | orchestrator | ++ export TEMPEST=false 2026-02-02 06:49:48.097169 | orchestrator | ++ TEMPEST=false 2026-02-02 06:49:48.097184 | orchestrator | ++ export IS_ZUUL=true 2026-02-02 06:49:48.097200 | orchestrator | ++ IS_ZUUL=true 2026-02-02 06:49:48.097215 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.102 2026-02-02 06:49:48.097231 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.102 2026-02-02 06:49:48.097248 | orchestrator | ++ export EXTERNAL_API=false 2026-02-02 06:49:48.097264 | orchestrator | ++ EXTERNAL_API=false 2026-02-02 06:49:48.097281 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-02 06:49:48.097297 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-02 06:49:48.097310 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-02 06:49:48.097322 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-02 06:49:48.097334 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-02 06:49:48.097346 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-02 06:49:48.097357 | orchestrator | ++ export RABBITMQ3TO4=true 2026-02-02 06:49:48.097367 | orchestrator | ++ RABBITMQ3TO4=true 2026-02-02 06:49:48.097377 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-02 06:49:48.098071 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' 
/opt/configuration/environments/manager/configuration.yml 2026-02-02 06:49:48.102357 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1 2026-02-02 06:49:48.102412 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1 2026-02-02 06:49:48.102431 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-02 06:49:48.102450 | orchestrator | + osism migrate rabbitmq3to4 prepare 2026-02-02 06:50:09.977564 | orchestrator | 2026-02-02 06:50:09 | ERROR  | Unable to get ansible vault password 2026-02-02 06:50:09.977649 | orchestrator | 2026-02-02 06:50:09 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-02-02 06:50:09.977662 | orchestrator | 2026-02-02 06:50:09 | ERROR  | Dropping encrypted entries 2026-02-02 06:50:10.016147 | orchestrator | 2026-02-02 06:50:10 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 2026-02-02 06:50:10.016683 | orchestrator | 2026-02-02 06:50:10 | INFO  | Kolla configuration check passed 2026-02-02 06:50:10.208995 | orchestrator | 2026-02-02 06:50:10 | INFO  | Created vhost 'openstack' with default_queue_type=quorum 2026-02-02 06:50:10.224575 | orchestrator | 2026-02-02 06:50:10 | INFO  | Set permissions for user 'openstack' on vhost 'openstack' 2026-02-02 06:50:10.554115 | orchestrator | + osism migrate rabbitmq3to4 list 2026-02-02 06:50:31.694379 | orchestrator | 2026-02-02 06:50:31 | ERROR  | Unable to get ansible vault password 2026-02-02 06:50:31.694491 | orchestrator | 2026-02-02 06:50:31 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-02-02 06:50:31.694508 | orchestrator | 2026-02-02 06:50:31 | ERROR  | Dropping encrypted entries 2026-02-02 06:50:31.744890 | orchestrator | 2026-02-02 06:50:31 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 
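The `list` step above connects to the RabbitMQ management API (port 15672) and reports queues that still have the classic type, one `- name (vhost: …, messages: …)` line each. The management API's `/api/queues` endpoint returns a JSON list of queue objects with `name`, `vhost`, `type`, and `messages` fields; a sketch of the filtering against that shape (the helper itself is hypothetical, not the actual `osism` implementation):

```python
# Filter a /api/queues-style payload down to classic queues and format
# them like the log lines below. select_classic_queues is a hypothetical
# helper; the field names match the RabbitMQ management API payload.
def select_classic_queues(queues):
    return [
        f"- {q['name']} (vhost: {q['vhost']}, messages: {q['messages']})"
        for q in queues
        if q.get("type") == "classic"
    ]

sample = [
    {"name": "barbican.workers", "vhost": "/", "type": "classic", "messages": 0},
    {"name": "event.sample", "vhost": "/", "type": "classic", "messages": 5},
    {"name": "some-quorum-queue", "vhost": "openstack", "type": "quorum", "messages": 0},
]
assert select_classic_queues(sample) == [
    "- barbican.workers (vhost: /, messages: 0)",
    "- event.sample (vhost: /, messages: 5)",
]
```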
2026-02-02 06:50:31.870591 | orchestrator | 2026-02-02 06:50:31 | INFO  | Found 206 classic queue(s) in vhost '/': 2026-02-02 06:50:31.870679 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - alarm.all.sample (vhost: /, messages: 0) 2026-02-02 06:50:31.870692 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - alarming.sample (vhost: /, messages: 0) 2026-02-02 06:50:31.870797 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - barbican.workers (vhost: /, messages: 0) 2026-02-02 06:50:31.870809 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - barbican.workers.barbican.queue (vhost: /, messages: 0) 2026-02-02 06:50:31.870820 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - barbican.workers_fanout_36a0475487094b6686fde74301f41f75 (vhost: /, messages: 0) 2026-02-02 06:50:31.870832 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - barbican.workers_fanout_4c399d57e07c4942aad7daaf833ee904 (vhost: /, messages: 0) 2026-02-02 06:50:31.870842 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - barbican.workers_fanout_a23d5864f5e34011a4611a21d4c7ff9b (vhost: /, messages: 0) 2026-02-02 06:50:31.870851 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - barbican_notifications.info (vhost: /, messages: 0) 2026-02-02 06:50:31.870861 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - central (vhost: /, messages: 0) 2026-02-02 06:50:31.870882 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - central.testbed-node-0 (vhost: /, messages: 0) 2026-02-02 06:50:31.870892 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - central.testbed-node-1 (vhost: /, messages: 0) 2026-02-02 06:50:31.870901 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - central.testbed-node-2 (vhost: /, messages: 0) 2026-02-02 06:50:31.871104 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - central_fanout_3492085795114eff93d89719f113005f (vhost: /, messages: 0) 2026-02-02 06:50:31.871120 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - central_fanout_3596629dbcb348cb9a926cc58ba2132f (vhost: /, messages: 0) 2026-02-02 
06:50:31.871130 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - central_fanout_458fafdaa1a94ed7acf18c500d87aba1 (vhost: /, messages: 0) 2026-02-02 06:50:31.871278 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - central_fanout_81f3129913b444baaded782155a63920 (vhost: /, messages: 0) 2026-02-02 06:50:31.871357 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - central_fanout_d0ded40cd6854b69b166700bafad325e (vhost: /, messages: 0) 2026-02-02 06:50:31.871429 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - central_fanout_d3395139accd47119a82737f15dd2ae3 (vhost: /, messages: 0) 2026-02-02 06:50:31.871443 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - cinder-backup (vhost: /, messages: 0) 2026-02-02 06:50:31.871512 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - cinder-backup.testbed-node-0 (vhost: /, messages: 0) 2026-02-02 06:50:31.871524 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - cinder-backup.testbed-node-1 (vhost: /, messages: 0) 2026-02-02 06:50:31.871538 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - cinder-backup.testbed-node-2 (vhost: /, messages: 0) 2026-02-02 06:50:31.871548 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - cinder-backup_fanout_1cf22e57aea54abc8e0796ecc540799c (vhost: /, messages: 0) 2026-02-02 06:50:31.871558 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - cinder-backup_fanout_32164151852045fcb4f9e859bf6e4ab2 (vhost: /, messages: 0) 2026-02-02 06:50:31.871626 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - cinder-backup_fanout_cd1a0db0523a4a86a224aeff7ee6c24e (vhost: /, messages: 0) 2026-02-02 06:50:31.871747 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - cinder-scheduler (vhost: /, messages: 0) 2026-02-02 06:50:31.871767 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - cinder-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-02-02 06:50:31.871778 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-02-02 06:50:31.871788 | orchestrator | 2026-02-02 
06:50:31 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-02-02 06:50:31.871797 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - cinder-scheduler_fanout_06e499a3296f4e05a03f5969a78f01db (vhost: /, messages: 0) 2026-02-02 06:50:31.872002 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - cinder-scheduler_fanout_5ab5b84e420d4bbab9a01e16d0d1a8c2 (vhost: /, messages: 0) 2026-02-02 06:50:31.872336 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - cinder-scheduler_fanout_dbd5d031d3db49ad9bf87a76b3a13724 (vhost: /, messages: 0) 2026-02-02 06:50:31.872355 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - cinder-volume (vhost: /, messages: 0) 2026-02-02 06:50:31.872364 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: /, messages: 0) 2026-02-02 06:50:31.872374 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 (vhost: /, messages: 0) 2026-02-02 06:50:31.872384 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_8a849d732988460a9a9a233e9fe40e62 (vhost: /, messages: 0) 2026-02-02 06:50:31.872395 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: /, messages: 0) 2026-02-02 06:50:31.872467 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: /, messages: 0) 2026-02-02 06:50:31.872484 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_d95653b0b1534c188023d16517bec543 (vhost: /, messages: 0) 2026-02-02 06:50:31.872568 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: /, messages: 0) 2026-02-02 06:50:31.872640 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: /, messages: 0) 2026-02-02 06:50:31.872656 | orchestrator | 2026-02-02 06:50:31 | INFO  
|  - cinder-volume.testbed-node-2@rbd-volumes_fanout_3160535296d54735b6c3448950e56970 (vhost: /, messages: 0) 2026-02-02 06:50:31.874224 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - cinder-volume_fanout_4fb28e9dda924753982a092a4677942e (vhost: /, messages: 0) 2026-02-02 06:50:31.874301 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - cinder-volume_fanout_d2cbdf3411f04625934cee6d02e0ca21 (vhost: /, messages: 0) 2026-02-02 06:50:31.874309 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - cinder-volume_fanout_f7aba5186ac4497eac6b1ef736175387 (vhost: /, messages: 0) 2026-02-02 06:50:31.874315 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - compute (vhost: /, messages: 0) 2026-02-02 06:50:31.874321 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - compute.testbed-node-3 (vhost: /, messages: 0) 2026-02-02 06:50:31.874326 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - compute.testbed-node-4 (vhost: /, messages: 0) 2026-02-02 06:50:31.874330 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - compute.testbed-node-5 (vhost: /, messages: 0) 2026-02-02 06:50:31.874335 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - compute_fanout_25b21c2e35324ed88ced0619e6b13458 (vhost: /, messages: 0) 2026-02-02 06:50:31.874339 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - compute_fanout_4ad64ce91f274f04b532b2077de8f283 (vhost: /, messages: 0) 2026-02-02 06:50:31.874362 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - compute_fanout_fc146fe2ff92461ba7576ce4ade3b9b0 (vhost: /, messages: 0) 2026-02-02 06:50:31.874367 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - conductor (vhost: /, messages: 0) 2026-02-02 06:50:31.874372 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - conductor.testbed-node-0 (vhost: /, messages: 0) 2026-02-02 06:50:31.874376 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - conductor.testbed-node-1 (vhost: /, messages: 0) 2026-02-02 06:50:31.874381 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - conductor.testbed-node-2 (vhost: /, messages: 0) 
2026-02-02 06:50:31.874385 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - conductor_fanout_033090e12849403ba314c15fb1186f0f (vhost: /, messages: 0) 2026-02-02 06:50:31.874389 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - conductor_fanout_0a4d013f6dad48d98ad291f843ef295b (vhost: /, messages: 0) 2026-02-02 06:50:31.874394 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - conductor_fanout_39646958b5014388ba7e52752b01f838 (vhost: /, messages: 0) 2026-02-02 06:50:31.874399 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - conductor_fanout_b384597b890c4a86bf9a8f3f43a3b02a (vhost: /, messages: 0) 2026-02-02 06:50:31.874403 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - conductor_fanout_ce13281328494650a652b99ad77d549b (vhost: /, messages: 0) 2026-02-02 06:50:31.874408 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - conductor_fanout_d09c32bf950042f993ec3217596bcacb (vhost: /, messages: 0) 2026-02-02 06:50:31.874412 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - event.sample (vhost: /, messages: 5) 2026-02-02 06:50:31.874425 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - magnum-conductor (vhost: /, messages: 0) 2026-02-02 06:50:31.874429 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - magnum-conductor.e43ub2aa73rc (vhost: /, messages: 0) 2026-02-02 06:50:31.874581 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - magnum-conductor.nw2iinshcmyw (vhost: /, messages: 0) 2026-02-02 06:50:31.874681 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - magnum-conductor.v3uekznoq7eq (vhost: /, messages: 0) 2026-02-02 06:50:31.874696 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - magnum-conductor_fanout_0ea9707466d3490393708c7be260a0cc (vhost: /, messages: 0) 2026-02-02 06:50:31.874750 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - magnum-conductor_fanout_58ae612c4ca04a3394dfe22a4e362b3d (vhost: /, messages: 0) 2026-02-02 06:50:31.874772 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - magnum-conductor_fanout_8b24f328f6884305b3daef95842caab2 (vhost: /, 
messages: 0)
2026-02-02 06:50:31.874782 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - magnum-conductor_fanout_aff2eb333a464f668474dabf1af3acc1 (vhost: /, messages: 0)
2026-02-02 06:50:31.874793 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - magnum-conductor_fanout_bb9174862a544765b08eeab45286faa9 (vhost: /, messages: 0)
2026-02-02 06:50:31.875136 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - magnum-conductor_fanout_bc78458fd6f74a238cc7feb0057031f6 (vhost: /, messages: 0)
2026-02-02 06:50:31.875155 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - magnum-conductor_fanout_c30971cf1db94aac81c1ec68d9fb1479 (vhost: /, messages: 0)
2026-02-02 06:50:31.875165 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - magnum-conductor_fanout_c56ce5bd444c4daa95b21b5df37bc2e9 (vhost: /, messages: 0)
2026-02-02 06:50:31.875176 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - manila-data (vhost: /, messages: 0)
2026-02-02 06:50:31.875310 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - manila-data.testbed-node-0 (vhost: /, messages: 0)
2026-02-02 06:50:31.875340 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - manila-data.testbed-node-1 (vhost: /, messages: 0)
2026-02-02 06:50:31.875786 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - manila-data.testbed-node-2 (vhost: /, messages: 0)
2026-02-02 06:50:31.875952 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - manila-data_fanout_3ae7b7d67c6848319d094e351585beac (vhost: /, messages: 0)
2026-02-02 06:50:31.875970 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - manila-data_fanout_5e80f8f7e52d424f8eeefe89ddd097cb (vhost: /, messages: 0)
2026-02-02 06:50:31.875980 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - manila-data_fanout_7e4f205883d442efa970699b0c3a13c0 (vhost: /, messages: 0)
2026-02-02 06:50:31.875991 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - manila-scheduler (vhost: /, messages: 0)
2026-02-02 06:50:31.876001 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: /, messages: 0)
2026-02-02 06:50:31.876502 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: /, messages: 0)
2026-02-02 06:50:31.876521 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: /, messages: 0)
2026-02-02 06:50:31.876531 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - manila-scheduler_fanout_362ead7d95f4495e84ab1414abca7eb1 (vhost: /, messages: 0)
2026-02-02 06:50:31.876540 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - manila-scheduler_fanout_390fdef98bcb46b0a871e3f661b01d9b (vhost: /, messages: 0)
2026-02-02 06:50:31.876550 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - manila-share (vhost: /, messages: 0)
2026-02-02 06:50:31.876727 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: /, messages: 0)
2026-02-02 06:50:31.876744 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 (vhost: /, messages: 0)
2026-02-02 06:50:31.876754 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: /, messages: 0)
2026-02-02 06:50:31.876764 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - manila-share_fanout_124ad3c73364446ebe6ee37692e960f7 (vhost: /, messages: 0)
2026-02-02 06:50:31.876774 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - manila-share_fanout_8e2d3df3ff3e4362bd6688f47adaa617 (vhost: /, messages: 0)
2026-02-02 06:50:31.876784 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - manila-share_fanout_96b23553e0524e6c944a5cb38bd606d9 (vhost: /, messages: 0)
2026-02-02 06:50:31.877303 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - notifications.audit (vhost: /, messages: 0)
2026-02-02 06:50:31.877322 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - notifications.critical (vhost: /, messages: 0)
2026-02-02 06:50:31.877332 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - notifications.debug (vhost: /, messages: 0)
2026-02-02 06:50:31.877341 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - notifications.error (vhost: /, messages: 0)
2026-02-02 06:50:31.877351 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - notifications.info (vhost: /, messages: 0)
2026-02-02 06:50:31.877613 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - notifications.sample (vhost: /, messages: 0)
2026-02-02 06:50:31.877632 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - notifications.warn (vhost: /, messages: 0)
2026-02-02 06:50:31.877703 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - octavia_provisioning_v2 (vhost: /, messages: 0)
2026-02-02 06:50:31.877715 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: /, messages: 0)
2026-02-02 06:50:31.877726 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: /, messages: 0)
2026-02-02 06:50:31.877797 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: /, messages: 0)
2026-02-02 06:50:31.877810 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - octavia_provisioning_v2_fanout_898b0d9c9ab140a08d70da7f84a5ac09 (vhost: /, messages: 0)
2026-02-02 06:50:31.877952 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - octavia_provisioning_v2_fanout_a278e8664c6542c89de53590030ebc50 (vhost: /, messages: 0)
2026-02-02 06:50:31.877971 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - octavia_provisioning_v2_fanout_c7368257c1604f93ab4d8d1d906e7617 (vhost: /, messages: 0)
2026-02-02 06:50:31.878085 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - producer (vhost: /, messages: 0)
2026-02-02 06:50:31.878760 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - producer.testbed-node-0 (vhost: /, messages: 0)
2026-02-02 06:50:31.878864 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - producer.testbed-node-1 (vhost: /, messages: 0)
2026-02-02 06:50:31.878903 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - producer.testbed-node-2 (vhost: /, messages: 0)
2026-02-02 06:50:31.878948 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - producer_fanout_0610f904958745b185044d09695af0fc (vhost: /, messages: 0)
2026-02-02 06:50:31.879119 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - producer_fanout_093bf1b911f0445aa5246101b5fe3d9b (vhost: /, messages: 0)
2026-02-02 06:50:31.879138 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - producer_fanout_1134d417b83e4856a5fd221116969135 (vhost: /, messages: 0)
2026-02-02 06:50:31.879148 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - producer_fanout_31ff1389e6c64975b2d41b1091251865 (vhost: /, messages: 0)
2026-02-02 06:50:31.879306 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - producer_fanout_582000dbb1df42a9b9f6cc504c41e097 (vhost: /, messages: 0)
2026-02-02 06:50:31.879324 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - producer_fanout_e3a680682dfa451da0191998a3581977 (vhost: /, messages: 0)
2026-02-02 06:50:31.879334 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-plugin (vhost: /, messages: 0)
2026-02-02 06:50:31.879498 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-plugin.testbed-node-0 (vhost: /, messages: 0)
2026-02-02 06:50:31.879516 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-plugin.testbed-node-1 (vhost: /, messages: 0)
2026-02-02 06:50:31.879526 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-plugin.testbed-node-2 (vhost: /, messages: 0)
2026-02-02 06:50:31.879624 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-plugin_fanout_041af7787bd04bbd92e6d3191431c03d (vhost: /, messages: 0)
2026-02-02 06:50:31.879639 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-plugin_fanout_3aae8f42e5314d049f2c899c9969ab64 (vhost: /, messages: 0)
2026-02-02 06:50:31.879847 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-plugin_fanout_4b3bdd2a47b94c5f9c4992f37365c422 (vhost: /, messages: 0)
2026-02-02 06:50:31.879865 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-plugin_fanout_84e4d6c3ddb648ef9e9d51f3e8ff863c (vhost: /, messages: 0)
2026-02-02 06:50:31.880080 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-plugin_fanout_ae94668c26e04a198ebbea3ce84bf9e5 (vhost: /, messages: 0)
2026-02-02 06:50:31.880099 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-plugin_fanout_b65e3b7a99244614aa24bea1453e8079 (vhost: /, messages: 0)
2026-02-02 06:50:31.880109 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-plugin_fanout_dc37ffd220a5442f9c2aa721562954a9 (vhost: /, messages: 0)
2026-02-02 06:50:31.880119 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-plugin_fanout_e455b52ac9994de7a0644cc1f98550ac (vhost: /, messages: 0)
2026-02-02 06:50:31.880264 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-plugin_fanout_f56c016fb7314403a80179d6df44952d (vhost: /, messages: 0)
2026-02-02 06:50:31.880286 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-reports-plugin (vhost: /, messages: 0)
2026-02-02 06:50:31.880481 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: /, messages: 0)
2026-02-02 06:50:31.880503 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: /, messages: 0)
2026-02-02 06:50:31.880576 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: /, messages: 0)
2026-02-02 06:50:31.880652 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-reports-plugin_fanout_0010e739f2c44a46b7da5a7aebc3613b (vhost: /, messages: 0)
2026-02-02 06:50:31.880670 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-reports-plugin_fanout_055218edf41b43baaf0a1992678d7e39 (vhost: /, messages: 0)
2026-02-02 06:50:31.880770 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-reports-plugin_fanout_1376e652e0be42149aa6855a511df583 (vhost: /, messages: 0)
2026-02-02 06:50:31.880785 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-reports-plugin_fanout_15af08643b0a4bccbe8802f6e6e84bbe (vhost: /, messages: 0)
2026-02-02 06:50:31.880858 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-reports-plugin_fanout_1b7a03d08a534ff58ed6d895fa02da1f (vhost: /, messages: 0)
2026-02-02 06:50:31.881145 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-reports-plugin_fanout_4ca5c094e7da425a9fd324eb4b73dd0d (vhost: /, messages: 0)
2026-02-02 06:50:31.881165 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-reports-plugin_fanout_59aa6faa33af42eaaf7796936d2f1f96 (vhost: /, messages: 0)
2026-02-02 06:50:31.881176 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-reports-plugin_fanout_5eb4561ac9244a5182da07edd42eb663 (vhost: /, messages: 0)
2026-02-02 06:50:31.881238 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-reports-plugin_fanout_655b9160dcbd4a5db92ab32a48796746 (vhost: /, messages: 0)
2026-02-02 06:50:31.881354 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-reports-plugin_fanout_8ec4693de3ab42eeb4faad243cec7e1e (vhost: /, messages: 0)
2026-02-02 06:50:31.881477 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-reports-plugin_fanout_90b400ae91df45aea6a6976154de86d6 (vhost: /, messages: 0)
2026-02-02 06:50:31.881490 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-reports-plugin_fanout_991bb713fdf34259877151b3715b737a (vhost: /, messages: 0)
2026-02-02 06:50:31.881587 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-reports-plugin_fanout_b5f22160d57d4366aed5abef8d730f1d (vhost: /, messages: 0)
2026-02-02 06:50:31.881601 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-reports-plugin_fanout_bf79b4d944174ddf9b9b0a1e7e9bd030 (vhost: /, messages: 0)
2026-02-02 06:50:31.881683 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-reports-plugin_fanout_dd1153b3f91046879902cdd9272d5500 (vhost: /, messages: 0)
2026-02-02 06:50:31.881777 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-reports-plugin_fanout_ddf22e2fbdea480493814e49486c6e55 (vhost: /, messages: 0)
2026-02-02 06:50:31.881831 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-reports-plugin_fanout_e1f4e21efdd04b078a109267fce14d43 (vhost: /, messages: 0)
2026-02-02 06:50:31.882005 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-reports-plugin_fanout_ebc7a5ca29d84de79fc550590e0294bd (vhost: /, messages: 0)
2026-02-02 06:50:31.882053 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-server-resource-versions (vhost: /, messages: 0)
2026-02-02 06:50:31.882180 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: /, messages: 0)
2026-02-02 06:50:31.882206 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: /, messages: 0)
2026-02-02 06:50:31.882359 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: /, messages: 0)
2026-02-02 06:50:31.882374 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-server-resource-versions_fanout_08cf1a0aed89400f86aef28f2bd4f661 (vhost: /, messages: 0)
2026-02-02 06:50:31.882383 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-server-resource-versions_fanout_7005ba537738469e9c98af9771f89518 (vhost: /, messages: 0)
2026-02-02 06:50:31.882467 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-server-resource-versions_fanout_71bfdd8a422b4121a1398c0c804f186f (vhost: /, messages: 0)
2026-02-02 06:50:31.882479 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-server-resource-versions_fanout_9b13988f017144a488b1e31740708cdd (vhost: /, messages: 0)
2026-02-02 06:50:31.882570 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-server-resource-versions_fanout_b7275fa730c94ba79d6949a55340dd2f (vhost: /, messages: 0)
2026-02-02 06:50:31.882660 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-server-resource-versions_fanout_c767a7d35a6043a4b7110eced0ac2653 (vhost: /, messages: 0)
2026-02-02 06:50:31.882673 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-server-resource-versions_fanout_e0bbbeef69014f37b40fd854d3b96fef (vhost: /, messages: 0)
2026-02-02 06:50:31.882778 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-server-resource-versions_fanout_e6b5c5b03a7b40268efb0f2dad812ddf (vhost: /, messages: 0)
2026-02-02 06:50:31.882792 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - q-server-resource-versions_fanout_fda98e3b7ea942aa81625c350ecd25c9 (vhost: /, messages: 0)
2026-02-02 06:50:31.882950 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - reply_04209074d1c94743abafc30e0c495dcd (vhost: /, messages: 0)
2026-02-02 06:50:31.883006 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - reply_08ab0002fe9740769a0c91c24b0cf26c (vhost: /, messages: 0)
2026-02-02 06:50:31.883020 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - reply_0e5b3388b6874f41a216192c7b3002db (vhost: /, messages: 0)
2026-02-02 06:50:31.883028 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - reply_17333140297c4326b950f307c65851ac (vhost: /, messages: 0)
2026-02-02 06:50:31.883140 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - reply_3197ffc5fdae499ba80f8393a2474b81 (vhost: /, messages: 0)
2026-02-02 06:50:31.883194 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - reply_3a20039026b24c37abc44887d0b740f9 (vhost: /, messages: 0)
2026-02-02 06:50:31.883205 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - reply_4156653188824d4fa055e40b6eddda43 (vhost: /, messages: 0)
2026-02-02 06:50:31.883315 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - reply_642efcf48589498d9222bcdf921d308c (vhost: /, messages: 0)
2026-02-02 06:50:31.883364 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - reply_64f3bff610bb49eebc1980fd1c847e06 (vhost: /, messages: 0)
2026-02-02 06:50:31.883377 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - reply_8b3917f3512347358d9a2d5049e7b077 (vhost: /, messages: 0)
2026-02-02 06:50:31.883487 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - reply_8dfe7b8db4f043d2b65690e27459a43f (vhost: /, messages: 0)
2026-02-02 06:50:31.883500 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - reply_9054028b7070480295fe5c164baad97b (vhost: /, messages: 0)
2026-02-02 06:50:31.883556 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - reply_b4e32aa9b75c42cf9b26230e20891c61 (vhost: /, messages: 0)
2026-02-02 06:50:31.883579 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - reply_badbf0366b42449e9720ff80d2f660d7 (vhost: /, messages: 0)
2026-02-02 06:50:31.883675 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - reply_d43328f180ba4f9482d2239b7b33c297 (vhost: /, messages: 0)
2026-02-02 06:50:31.883813 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - reply_dae605e5fff94bf889c795ce190292bd (vhost: /, messages: 0)
2026-02-02 06:50:31.883826 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - reply_e51555968e1d442b96817d696081d216 (vhost: /, messages: 0)
2026-02-02 06:50:31.883955 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - reply_e9030376baaa460c8de0ae25de9ab02a (vhost: /, messages: 0)
2026-02-02 06:50:31.883970 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - reply_f235cbd68a634d52b1871cfaf072d7d3 (vhost: /, messages: 0)
2026-02-02 06:50:31.884237 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - scheduler (vhost: /, messages: 0)
2026-02-02 06:50:31.884252 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - scheduler.testbed-node-0 (vhost: /, messages: 0)
2026-02-02 06:50:31.884260 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - scheduler.testbed-node-1 (vhost: /, messages: 0)
2026-02-02 06:50:31.884340 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - scheduler.testbed-node-2 (vhost: /, messages: 0)
2026-02-02 06:50:31.884352 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - scheduler_fanout_395889ad47934e3baa44fb0325c40732 (vhost: /, messages: 0)
2026-02-02 06:50:31.884361 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - scheduler_fanout_43e12573c6c441c2a4a519f14a7e20d5 (vhost: /, messages: 0)
2026-02-02 06:50:31.884369 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - scheduler_fanout_53b5bccef3e8418b9a01db1a81f581df (vhost: /, messages: 0)
2026-02-02 06:50:31.884535 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - scheduler_fanout_83529395854f432a8f033b424a186bf0 (vhost: /, messages: 0)
2026-02-02 06:50:31.884587 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - scheduler_fanout_a7b44bf6fc6e492cb4817eb13212a2bd (vhost: /, messages: 0)
2026-02-02 06:50:31.884598 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - scheduler_fanout_fcaeaff2be6042d991acf5e624b709a3 (vhost: /, messages: 0)
2026-02-02 06:50:31.884655 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - worker (vhost: /, messages: 0)
2026-02-02 06:50:31.884668 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - worker.testbed-node-0 (vhost: /, messages: 0)
2026-02-02 06:50:31.884677 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - worker.testbed-node-1 (vhost: /, messages: 0)
2026-02-02 06:50:31.884875 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - worker.testbed-node-2 (vhost: /, messages: 0)
2026-02-02 06:50:31.884887 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - worker_fanout_270f32e8a06e4caa8445d586e1b09429 (vhost: /, messages: 0)
2026-02-02 06:50:31.884898 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - worker_fanout_35e0b15df71744d1af483d0931231b90 (vhost: /, messages: 0)
2026-02-02 06:50:31.884906 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - worker_fanout_474db5630c1b432e9ab439c40a5638f2 (vhost: /, messages: 0)
2026-02-02 06:50:31.885075 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - worker_fanout_5f2b18c18c6042e2b7b91f6d0c9240b3 (vhost: /, messages: 0)
2026-02-02 06:50:31.885089 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - worker_fanout_9de68c113a42487e83b7f9f8840bb595 (vhost: /, messages: 0)
2026-02-02 06:50:31.885097 | orchestrator | 2026-02-02 06:50:31 | INFO  |  - worker_fanout_ed679dea44ec4162ac28bf096c3b836b (vhost: /, messages: 0)
2026-02-02 06:50:32.203418 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges
2026-02-02 06:50:34.199424 | orchestrator | usage: osism migrate rabbitmq3to4 [-h] [--server SERVER] [--dry-run]
2026-02-02 06:50:34.199547 | orchestrator |                                   [--no-close-connections] [--quorum]
2026-02-02 06:50:34.199563 | orchestrator |                                   [--vhost VHOST]
2026-02-02 06:50:34.199574 | orchestrator |                                   [{list,delete,prepare,check}]
2026-02-02 06:50:34.199585 | orchestrator |                                   [{aodh,barbican,ceilometer,cinder,designate,notifications,manager,magnum,manila,neutron,nova,octavia}]
2026-02-02 06:50:34.199597 | orchestrator | osism migrate rabbitmq3to4: error: argument command: invalid choice: 'list-exchanges' (choose from list, delete, prepare, check)
2026-02-02 06:50:34.940414 | orchestrator | ERROR
2026-02-02 06:50:34.940623 | orchestrator | {
2026-02-02 06:50:34.940661 | orchestrator |   "delta": "2:02:20.293162",
2026-02-02 06:50:34.940685 | orchestrator |   "end": "2026-02-02 06:50:34.526846",
2026-02-02 06:50:34.940706 | orchestrator |   "msg": "non-zero return code",
2026-02-02 06:50:34.940726 | orchestrator |   "rc": 2,
2026-02-02 06:50:34.940744 | orchestrator |   "start": "2026-02-02 04:48:14.233684"
2026-02-02 06:50:34.940762 | orchestrator | } failure
2026-02-02 06:50:35.189525 |
2026-02-02 06:50:35.189645 | PLAY RECAP
2026-02-02 06:50:35.189703 | orchestrator | ok: 30 changed: 11 unreachable: 0 failed: 1 skipped: 6 rescued: 0 ignored: 0
2026-02-02 06:50:35.189730 |
2026-02-02 06:50:35.426490 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main]
2026-02-02 06:50:35.427818 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-02 06:50:36.181712 |
2026-02-02 06:50:36.181878 | PLAY [Post output play]
2026-02-02 06:50:36.199968 |
2026-02-02 06:50:36.200151 | LOOP [stage-output : Register sources]
2026-02-02 06:50:36.272015 |
2026-02-02 06:50:36.272361 | TASK [stage-output : Check sudo]
2026-02-02 06:50:37.147609 | orchestrator | sudo: a password is required
2026-02-02 06:50:37.314818 | orchestrator | ok: Runtime: 0:00:00.016962
2026-02-02 06:50:37.331671 |
2026-02-02 06:50:37.331834 | LOOP [stage-output : Set source and destination for files and folders]
2026-02-02 06:50:37.372954 |
2026-02-02 06:50:37.373253 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-02-02 06:50:37.439220 | orchestrator | ok
2026-02-02 06:50:37.449046 |
2026-02-02 06:50:37.449192 | LOOP [stage-output : Ensure target folders exist]
2026-02-02 06:50:37.910164 | orchestrator | ok: "docs"
2026-02-02 06:50:37.910454 |
2026-02-02 06:50:38.164487 | orchestrator | ok: "artifacts"
2026-02-02 06:50:38.419350 | orchestrator | ok: "logs"
2026-02-02 06:50:38.442275 |
2026-02-02 06:50:38.442439 | LOOP [stage-output : Copy files and folders to staging folder]
2026-02-02 06:50:38.483642 |
2026-02-02 06:50:38.483937 | TASK [stage-output : Make all log files readable]
2026-02-02 06:50:38.778732 | orchestrator | ok
2026-02-02 06:50:38.787963 |
2026-02-02 06:50:38.788129 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-02-02 06:50:38.822565 | orchestrator | skipping: Conditional result was False
2026-02-02 06:50:38.839197 |
2026-02-02 06:50:38.839342 | TASK [stage-output : Discover log files for compression]
2026-02-02 06:50:38.863872 | orchestrator | skipping: Conditional result was False
2026-02-02 06:50:38.881208 |
2026-02-02 06:50:38.881395 | LOOP [stage-output : Archive everything from logs]
2026-02-02 06:50:38.927840 |
2026-02-02 06:50:38.928025 | PLAY [Post cleanup play]
2026-02-02 06:50:38.937116 |
2026-02-02 06:50:38.937224 | TASK [Set cloud fact (Zuul deployment)]
2026-02-02 06:50:38.991233 | orchestrator | ok
2026-02-02 06:50:39.001625 |
2026-02-02 06:50:39.001734 | TASK [Set cloud fact (local deployment)]
2026-02-02 06:50:39.025383 | orchestrator | skipping: Conditional result was False
2026-02-02 06:50:39.036371 |
2026-02-02 06:50:39.036495 | TASK [Clean the cloud environment]
2026-02-02 06:50:39.862584 | orchestrator | 2026-02-02 06:50:39 - clean up servers
2026-02-02 06:50:40.668787 | orchestrator | 2026-02-02 06:50:40 - testbed-manager
2026-02-02 06:50:40.758639 | orchestrator | 2026-02-02 06:50:40 - testbed-node-3
2026-02-02 06:50:40.854349 | orchestrator | 2026-02-02 06:50:40 - testbed-node-1
2026-02-02 06:50:40.972064 | orchestrator | 2026-02-02 06:50:40 - testbed-node-5
2026-02-02 06:50:41.060534 | orchestrator | 2026-02-02 06:50:41 - testbed-node-2
2026-02-02 06:50:41.152170 | orchestrator | 2026-02-02 06:50:41 - testbed-node-4
2026-02-02 06:50:41.258688 | orchestrator | 2026-02-02 06:50:41 - testbed-node-0
2026-02-02 06:50:41.344266 | orchestrator | 2026-02-02 06:50:41 - clean up keypairs
2026-02-02 06:50:41.359277 | orchestrator | 2026-02-02 06:50:41 - testbed
2026-02-02 06:50:41.383350 | orchestrator | 2026-02-02 06:50:41 - wait for servers to be gone
2026-02-02 06:50:52.433344 | orchestrator | 2026-02-02 06:50:52 - clean up ports
2026-02-02 06:50:52.619371 | orchestrator | 2026-02-02 06:50:52 - 3637a39c-dfa7-4689-8197-25e5882a33c9
2026-02-02 06:50:52.911975 | orchestrator | 2026-02-02 06:50:52 - 8ad960a9-0988-4001-99b7-f8058562e4cf
2026-02-02 06:50:53.178900 | orchestrator | 2026-02-02 06:50:53 - 8de788e6-91b6-4d30-9f1c-fce2587181c0
2026-02-02 06:50:53.402599 | orchestrator | 2026-02-02 06:50:53 - 90e1df68-daa1-416f-a52e-7bbd5424855d
2026-02-02 06:50:53.803131 | orchestrator | 2026-02-02 06:50:53 - bad893b5-5345-49a4-918f-067ceb5436a2
2026-02-02 06:50:54.074955 | orchestrator | 2026-02-02 06:50:54 - e4b62ab2-ef3a-47ba-91c2-55cc951da39f
2026-02-02 06:50:54.309274 | orchestrator | 2026-02-02 06:50:54 - f9b49fe4-bdf4-4fcc-9b77-0cd94cce9249
2026-02-02 06:50:54.529327 | orchestrator | 2026-02-02 06:50:54 - clean up volumes
2026-02-02 06:50:54.637731 | orchestrator | 2026-02-02 06:50:54 - testbed-volume-1-node-base
2026-02-02 06:50:54.677744 | orchestrator | 2026-02-02 06:50:54 - testbed-volume-0-node-base
2026-02-02 06:50:54.721792 | orchestrator | 2026-02-02 06:50:54 - testbed-volume-4-node-base
2026-02-02 06:50:54.765974 | orchestrator | 2026-02-02 06:50:54 - testbed-volume-3-node-base
2026-02-02 06:50:54.810827 | orchestrator | 2026-02-02 06:50:54 - testbed-volume-manager-base
2026-02-02 06:50:54.854623 | orchestrator | 2026-02-02 06:50:54 - testbed-volume-2-node-base
2026-02-02 06:50:54.896722 | orchestrator | 2026-02-02 06:50:54 - testbed-volume-5-node-base
2026-02-02 06:50:54.938855 | orchestrator | 2026-02-02 06:50:54 - testbed-volume-5-node-5
2026-02-02 06:50:54.981084 | orchestrator | 2026-02-02 06:50:54 - testbed-volume-3-node-3
2026-02-02 06:50:55.021339 | orchestrator | 2026-02-02 06:50:55 - testbed-volume-8-node-5
2026-02-02 06:50:55.064222 | orchestrator | 2026-02-02 06:50:55 - testbed-volume-7-node-4
2026-02-02 06:50:55.109450 | orchestrator | 2026-02-02 06:50:55 - testbed-volume-0-node-3
2026-02-02 06:50:55.149181 | orchestrator | 2026-02-02 06:50:55 - testbed-volume-1-node-4
2026-02-02 06:50:55.188646 | orchestrator | 2026-02-02 06:50:55 - testbed-volume-6-node-3
2026-02-02 06:50:55.227163 | orchestrator | 2026-02-02 06:50:55 - testbed-volume-2-node-5
2026-02-02 06:50:55.270599 | orchestrator | 2026-02-02 06:50:55 - testbed-volume-4-node-4
2026-02-02 06:50:55.310438 | orchestrator | 2026-02-02 06:50:55 - disconnect routers
2026-02-02 06:50:55.491588 | orchestrator | 2026-02-02 06:50:55 - testbed
2026-02-02 06:50:56.462115 | orchestrator | 2026-02-02 06:50:56 - clean up subnets
2026-02-02 06:50:56.517358 | orchestrator | 2026-02-02 06:50:56 - subnet-testbed-management
2026-02-02 06:50:56.686471 | orchestrator | 2026-02-02 06:50:56 - clean up networks
2026-02-02 06:50:57.391162 | orchestrator | 2026-02-02 06:50:57 - net-testbed-management
2026-02-02 06:50:57.672875 | orchestrator | 2026-02-02 06:50:57 - clean up security groups
2026-02-02 06:50:57.712676 | orchestrator | 2026-02-02 06:50:57 - testbed-management
2026-02-02 06:50:57.823885 | orchestrator | 2026-02-02 06:50:57 - testbed-node
2026-02-02 06:50:57.929339 | orchestrator | 2026-02-02 06:50:57 - clean up floating ips
2026-02-02 06:50:57.966852 | orchestrator | 2026-02-02 06:50:57 - 81.163.193.102
2026-02-02 06:50:58.316701 | orchestrator | 2026-02-02 06:50:58 - clean up routers
2026-02-02 06:50:58.434082 | orchestrator | 2026-02-02 06:50:58 - testbed
2026-02-02 06:51:00.095402 | orchestrator | ok: Runtime: 0:00:20.502170
2026-02-02 06:51:00.100254 |
2026-02-02 06:51:00.100413 | PLAY RECAP
2026-02-02 06:51:00.100533 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-02-02 06:51:00.100595 |
2026-02-02 06:51:00.237425 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-02 06:51:00.238458 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-02 06:51:00.983412 |
2026-02-02 06:51:00.983570 | PLAY [Cleanup play]
2026-02-02 06:51:00.999226 |
2026-02-02 06:51:00.999351 | TASK [Set cloud fact (Zuul deployment)]
2026-02-02 06:51:01.051804 | orchestrator | ok
2026-02-02 06:51:01.061646 |
2026-02-02 06:51:01.062190 | TASK [Set cloud fact (local deployment)]
2026-02-02 06:51:01.097249 | orchestrator | skipping: Conditional result was False
2026-02-02 06:51:01.112602 |
2026-02-02 06:51:01.112751 | TASK [Clean the cloud environment]
2026-02-02 06:51:02.303315 | orchestrator | 2026-02-02 06:51:02 - clean up servers
2026-02-02 06:51:02.773941 | orchestrator | 2026-02-02 06:51:02 - clean up keypairs
2026-02-02 06:51:02.791765 | orchestrator | 2026-02-02 06:51:02 - wait for servers to be gone
2026-02-02 06:51:02.837967 | orchestrator | 2026-02-02 06:51:02 - clean up ports
2026-02-02 06:51:02.923440 | orchestrator | 2026-02-02 06:51:02 - clean up volumes
2026-02-02 06:51:02.988065 | orchestrator | 2026-02-02 06:51:02 - disconnect routers
2026-02-02 06:51:03.013697 | orchestrator | 2026-02-02 06:51:03 - clean up subnets
2026-02-02 06:51:03.033983 | orchestrator | 2026-02-02 06:51:03 - clean up networks
2026-02-02 06:51:03.202300 | orchestrator | 2026-02-02 06:51:03 - clean up security groups
2026-02-02 06:51:03.234067 | orchestrator | 2026-02-02 06:51:03 - clean up floating ips
2026-02-02 06:51:03.259337 | orchestrator | 2026-02-02 06:51:03 - clean up routers
2026-02-02 06:51:03.650319 | orchestrator | ok: Runtime: 0:00:01.373611
2026-02-02 06:51:03.654216 |
2026-02-02 06:51:03.654387 | PLAY RECAP
2026-02-02 06:51:03.654520 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-02-02 06:51:03.654590 |
2026-02-02 06:51:03.781163 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-02 06:51:03.782246 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-02 06:51:04.524875 |
2026-02-02 06:51:04.525063 | PLAY [Base post-fetch]
2026-02-02 06:51:04.540359 |
2026-02-02 06:51:04.540925 | TASK [fetch-output : Set log path for multiple nodes]
2026-02-02 06:51:04.596701 | orchestrator | skipping: Conditional result was False
2026-02-02 06:51:04.613530 |
2026-02-02 06:51:04.613751 | TASK [fetch-output : Set log path for single node]
2026-02-02 06:51:04.663538 | orchestrator | ok
2026-02-02 06:51:04.672358 |
2026-02-02 06:51:04.672493 | LOOP [fetch-output : Ensure local output dirs]
2026-02-02 06:51:05.150078 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/13c99b7f5ab0455e81e88aef51d00270/work/logs"
2026-02-02 06:51:05.457052 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/13c99b7f5ab0455e81e88aef51d00270/work/artifacts"
2026-02-02 06:51:05.736820 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/13c99b7f5ab0455e81e88aef51d00270/work/docs"
2026-02-02 06:51:05.773824 |
2026-02-02 06:51:05.774024 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-02-02 06:51:06.681645 | orchestrator | changed: .d..t...... ./
2026-02-02 06:51:06.681923 | orchestrator | changed: All items complete
2026-02-02 06:51:06.681960 |
2026-02-02 06:51:07.398946 | orchestrator | changed: .d..t...... ./
2026-02-02 06:51:08.109809 | orchestrator | changed: .d..t...... ./
2026-02-02 06:51:08.144951 |
2026-02-02 06:51:08.145123 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-02-02 06:51:08.177947 | orchestrator | skipping: Conditional result was False
2026-02-02 06:51:08.183972 | orchestrator | skipping: Conditional result was False
2026-02-02 06:51:08.203895 |
2026-02-02 06:51:08.204102 | PLAY RECAP
2026-02-02 06:51:08.204187 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-02-02 06:51:08.204231 |
2026-02-02 06:51:08.325473 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-02 06:51:08.327963 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-02 06:51:09.061909 |
2026-02-02 06:51:09.062122 | PLAY [Base post]
2026-02-02 06:51:09.076937 |
2026-02-02 06:51:09.077122 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-02-02 06:51:10.073622 | orchestrator | changed
2026-02-02 06:51:10.083518 |
2026-02-02 06:51:10.083639 | PLAY RECAP
2026-02-02 06:51:10.083712 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-02-02 06:51:10.083787 |
2026-02-02 06:51:10.198743 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-02 06:51:10.202610 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-02-02 06:51:10.998703 |
2026-02-02 06:51:10.998905 | PLAY [Base post-logs]
2026-02-02 06:51:11.010125 |
2026-02-02 06:51:11.010273 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-02-02 06:51:11.453292 | localhost | changed
2026-02-02 06:51:11.464319 |
2026-02-02 06:51:11.464466 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-02-02 06:51:11.501164 | localhost | ok
2026-02-02 06:51:11.505137 |
2026-02-02 06:51:11.505262 | TASK [Set zuul-log-path fact]
2026-02-02 06:51:11.520574 | localhost | ok
2026-02-02 06:51:11.529817 |
2026-02-02 06:51:11.529922 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-02 06:51:11.556116 | localhost | ok
2026-02-02 06:51:11.559816 |
2026-02-02 06:51:11.559934 | TASK [upload-logs : Create log directories]
2026-02-02 06:51:12.049378 | localhost | changed
2026-02-02 06:51:12.054704 |
2026-02-02 06:51:12.054897 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-02-02 06:51:12.546076 | localhost -> localhost | ok: Runtime: 0:00:00.007167
2026-02-02 06:51:12.550106 |
2026-02-02 06:51:12.550223 | TASK [upload-logs : Upload logs to log server]
2026-02-02 06:51:13.135226 | localhost | Output suppressed because no_log was given
2026-02-02 06:51:13.138491 |
2026-02-02 06:51:13.138650 | LOOP [upload-logs : Compress console log and json output]
2026-02-02 06:51:13.197149 | localhost | skipping: Conditional result was False
2026-02-02 06:51:13.202185 | localhost | skipping: Conditional result was False
2026-02-02 06:51:13.210198 |
2026-02-02 06:51:13.210410 | LOOP [upload-logs : Upload compressed console log and json output]
2026-02-02 06:51:13.258150 | localhost | skipping: Conditional result was False
2026-02-02 06:51:13.260334 |
2026-02-02 06:51:13.262106 | localhost | skipping: Conditional result was False
2026-02-02 06:51:13.277182 |
2026-02-02 06:51:13.277407 | LOOP [upload-logs : Upload console log and json output]